PRE2022 3 Group2

From Control Systems Technology Group
 
The use of robotics in disaster response operations has gained significant attention in recent years. In particular, robots are being used to navigate complex terrain and locate survivors. However, current rescue robots have multiple limitations that make it difficult to deploy them successfully. This paper proposes and argues for the use of low-cost vine robots as a promising technology for search and rescue missions in urban areas affected by earthquakes. Vine robots are small, flexible, and lightweight robots that can navigate through tight spaces and confined areas, making them ideal for searching collapsed buildings for survivors. They are also considerably cheaper than current state-of-the-art search and rescue robots. However, vine robots have not yet been used in real-life search and rescue missions due to various limitations. This paper addresses those limitations, specifically with regard to pathfinding and localization. It includes research into components that support the vine robot, such as sensors, comparing sensor types to determine which is best suited for extending the vine robot's ability to localize survivors. Furthermore, a simulation of a localization algorithm with pathfinding capabilities is used to assess the usefulness of such a robot when searching a collapsed building during a search and rescue mission. The findings indicate that further development is needed before vine robot technology can be used successfully in search and rescue missions.


==Introduction and research aims==
“Two large earthquakes struck the southeastern region of Turkey near the border with Syria on Monday, killing thousands and toppling residential buildings across the region.” (AJLabs, 2023) Both earthquakes had a magnitude of 7.5 or higher, which caused buildings to be displaced from their foundations with people still inside them. Some people survived the collapse of a building but were trapped in the rubble.


Earthquakes are among the most devastating natural disasters that can occur in urban areas, leading to significant damage to infrastructure, loss of life, and displacement of communities. In recent years, search and rescue operations have become increasingly important in the aftermath of earthquakes. These operations aim to locate victims trapped under collapsed buildings and provide them with the necessary medical care and assistance. However, these operations can be challenging due to the complexity and dangers of navigating through the rubble of damaged buildings. To address these challenges, technology has emerged as a promising tool for search and rescue operations in urban areas affected by earthquakes. Technology has the potential to significantly improve the efficiency and effectiveness of search and rescue operations, as well as reduce the risk to human rescuers.


This paper provides an overview of the use of technology and robotics in the localization of victims of earthquakes that cause infrastructure damage, specifically in urban areas. It further goes into the design and capabilities of such robotics, highlighting potential advantages over traditional search and rescue methods. The paper also discusses the challenges associated with this technology, such as limited battery life, difficulties in controlling a robot in complex environments, and the need for specialized training for operators.


The research aims addressed in this paper include:


#What are the potential advantages and challenges associated with the use of low-cost robotics and technology in earthquake response efforts?
 
#How have robots been deployed in recent earthquake disasters and what has their impact on search and rescue operations been?
#What are the future prospects of low-cost robots in earthquake response efforts?
 
#What does a localization and path-planning algorithm for urban search and rescue look like?
 


==State-of-the-art literature==


'''Design of four-arm four-crawler disaster response robot OCTOPUS'''
 
[[File:Four-arm disaster response robot OCTOPUS.gif|thumb|Four-arm disaster response robot OCTOPUS (Kamezaki et al., 2016).]]
The OCTOPUS robot, presented in this paper, boasts four arms and four crawlers, providing exceptional mobility and flexibility. Its arms are engineered with multiple degrees of freedom, enabling the robot to execute intricate tasks such as lifting heavy objects and opening doors. Furthermore, the crawlers are designed to offer stability and traction, which allow the robot to move seamlessly on irregular and slippery surfaces (Kamezaki et al., 2016).




'''Search and Rescue System for Alive Human Detection by Semi-Autonomous Mobile Rescue Robot'''
 
[[File:7856489-fig-8-source-small.gif|thumb|Search and rescue system for alive human detection by semi-autonomous mobile rescue robot (Uddin & Islam 2016).]]
This study introduces a low-cost robot designed for detecting humans in perilous rescue missions. The paper presents several system block diagrams and flowcharts to demonstrate the robot's operational capabilities. The robot incorporates a PIR sensor and an IP camera to detect human presence through their infrared radiation. These sensors are readily available and cost-effective compared to other urban search and rescue robots (Uddin & Islam, 2016).




'''Robots Gear Up for Disaster Response'''
 
[[File:4399386-fig-10-source-small.gif|thumb|Active scope camera for urban search and rescue. Turning motion in narrow gaps (Hatazaki et al., 2007).]]
Although brilliant robotic technology exists, there is a need to integrate it into complete, robust systems. Furthermore, there is a need to develop sensors and other components that are smaller, stronger, and more affordable (Anthes 2010).




'''Multimodality robotic systems: Integrated combined legged-aerial mobility for subterranean search-and-rescue'''
 
[[File:1-s2.0-S0921889022000756-gr2.jpg|thumb|The legged-aerial explorer with its full sensor suite and the UAV carrier platform (Lindqvist et al., 2022).]]
This article discusses a Boston Dynamics Spot robot that is enhanced with a UAV carrier platform, and an autonomy sensor payload (Lindqvist et al., 2022). The paper demonstrates how to integrate hardware and software with each other and with the architecture of the robot.  


This book offers a comprehensive overview of rescue robotics within the broader context of emergency informatics. It provides a chronological summary of the documented deployments of robots in response to disasters and analyzes them formally. The book serves as a definitive guide to the theory and practice of previous disaster robotics (Murphy, 2017).  


'''Application of robot technologies to the disaster sites'''
 
[[File:Kohga 3 in the gymnasium.png|thumb|Kohga 3 in the gymnasium. Application of Robot Technologies to the Disaster Sites (Osumi, 2014).]]
For the first time during the Great East Japan Earthquake disaster, Japanese rescue robots were utilized in actual disaster sites. Their tele-operation function and ability to move on debris were essential due to the radioactivity and debris present (Osumi, 2014).




'''Drone-assisted disaster management: Finding victims via infrared camera and lidar sensor fusion'''
 
[[File:7941945-fig-3-source-small.gif|thumb|Drone hardware specification. Drone-assisted disaster management: Finding victims via infrared camera and lidar sensor fusion (Lee et al., 2016).]]
This article mentions that the use of drones has proven to be an efficient method to localize survivors in hard-to-reach areas (e.g., collapsed structures) (Lee et al., 2016). The paper presents a comprehensive framework for drone hardware that makes it possible to explore GPS-denied environments. A Hokuyo lidar is used for global mapping and an Intel RealSense camera for local mapping. The outcomes show that the combination of these sensors can assist USAR operations in finding victims of natural disasters.




===Users===
Within urban search and rescue operations, several users of technology and robotics can be named.


First, the International Search and Rescue Advisory Group (INSARAG) determines the minimum international standards for urban search and rescue (USAR) teams (INSARAG – Preparedness Response, n.d.). This organization establishes a methodology for coordination in earthquake response. Therefore, this organization will have to weigh the pros and cons of using robotics in USAR. If INSARAG sees the added value of using robotics in search and rescue operations, it can promote the usage and include it in the guidelines.


Second, governments will need to purchase all necessary equipment. In the Netherlands, the Nederlands Instituut Publieke Veiligheid owns all the equipment of the Dutch USAR team (Nederlands Instituut Publieke Veiligheid, 2023). This institute will need to see the added value of the robot while taking into account the guidelines of INSARAG.


The third group of users consists of members of the USAR teams that will have to work with the technology on site. It will be used alongside techniques that are already in use. USAR teams are multidisciplinary, and not all members of the team will come into contact with the robot (e.g., nurses or doctors). In order to use robotics and technology properly, the USAR members who execute the search and rescue operation will need training. For the Dutch USAR team, this training can be conducted by the Staff Officer Education, Training and Exercise (Het team - USAR.NL, n.d.). USAR members will need to be able to set up the technology, navigate it inside a collapsed building (if it is not fully autonomous), read the data it provides, and find survivors with its help. Furthermore, they will need to decide whether it is safe to follow the path to a survivor. Lastly, team members will need to retract the technology and reuse it if possible.
 
Lastly, the victims of the earthquake, whom the technology will be used to localize, are users as well. They will not have any control over it but will come into contact with it. It is therefore important that they are not scared of the robot and do not try to defend themselves from it.


===Users' needs===
===Users' needs===


===Society===
Society will benefit from technology and robotics as they will help USAR operations localize victims and find a path through the rubble to a victim. This will reduce the time needed to search for survivors after earthquakes, which is important as the chances of survival decrease with each passing day. Furthermore, replacing human rescuers or search dogs with a robot will put them in less danger. Lastly, robots will be able to go further into the debris without endangering humans or the technology itself. The use of robots and technology will therefore influence the number of people that can be saved.


===Enterprise===
If the robot or technology becomes available for sale, the company behind it will have several interests. First, it will want to create a robot that can make a difference: the robot should help USAR operations localize survivors in the rubble. Second, the company will want to either make a profit or make enough money to break even, since it will need money to invest back into the product to further improve the robot. For the company, it is important to take into account the guidelines of INSARAG, as this institute can promote the usage of rescue robots in its global network.


==Specifications==
Before identifying the solutions for navigation and localization, a clear list of specifications for the robot/technology is given.  


'''Must:'''
*Be able to put out fires and melt in extreme heat


==Vine robot==
[[File:Vinebot.jpg|thumb|367x367px|Vine Robot]]
Based on the state-of-the-art literature, the vine robot was chosen as the design. Vine robots are soft, small, lightweight, and flexible robots designed to navigate through tight spaces, making them ideal for searching collapsed buildings for survivors. These robots utilize air pressure, which expands them through the tip, making them move and grow like a worm. The vine robot was introduced in 2017, making it a relatively new robotic design. There have been some attempts to steer the vine robot in chosen directions using so-called muscles: four tubes (muscles) around the large tube (the vine robot body), each with its own air supply. This allows the vine robot to bend in any direction, depending on the pressure in each muscle.
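The pressure-based steering described above can be sketched in code. The sketch below is purely illustrative and not taken from any published vine-robot controller; the muscle layout, base pressure, and gain values are assumed:

```python
import math

# Assumed layout: four pneumatic muscles at 0, 90, 180 and 270 degrees
# around the circumference of the vine body.
MUSCLE_ANGLES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]

def muscle_pressures(steer_angle, magnitude, base=20.0, gain=30.0):
    """Map a desired bend direction (radians) and bend magnitude (0..1)
    to a pressure (kPa, assumed units) for each of the four muscles.

    Muscles aligned with the bend direction receive extra pressure,
    which curves the tip toward that side.
    """
    pressures = []
    for angle in MUSCLE_ANGLES:
        # Cosine alignment: 1 when the muscle faces the bend direction,
        # 0 when perpendicular to or facing away from it.
        alignment = max(0.0, math.cos(steer_angle - angle))
        pressures.append(base + gain * magnitude * alignment)
    return pressures
```

For example, a full bend toward the 0-degree muscle (`muscle_pressures(0.0, 1.0)`) pressurizes that muscle to 50 kPa while the other three stay at the 20 kPa base.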
 
However, current vine robots lack the ability to navigate and localize, which is a critical requirement for them to be used successfully in such missions. The following chapters provide an analysis of the sensors best suited for localization, as well as a simulation to find out which algorithm works best for navigation.
 
Now that the vine robot has been chosen as the robotic design, the constraints of the robot can be established. This is mainly a brief description of how the robot can be designed; a complete design and manufacturing plan for the robot is outside the scope of this report.


'''Constraints:'''
*'''Turn radius:''' In research, a vine robot was able to round a 90 degree turn (Coad et al., 2019). The optimal turn radius supplied in current literature is 20 centimeters (Auf der Maur et al., 2021).


==Localization==


===Current Difficulties===
In conclusion, localizing victims after earthquakes is a challenging task that requires extensive planning, coordination, and resources. The scale of the disaster, the nature of the terrain, the complexity of the affected infrastructure, and the timing of the earthquake can all pose significant challenges for rescue teams.  


===Background of localization for vine robots===
[[File:3-512x512.png|thumb|Infrared sensor module for Arduino]]
Localization of survivors is a critical task in search and rescue operations in destroyed buildings. Vine robots can play an important role in this task by using their flexibility and agility to navigate through complex and unpredictable environments and locate survivors. Localization involves determining the position of the robot and the position of any survivors in the environment, and can be achieved through a variety of techniques and strategies.
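Determining the robot's own position is the first half of this task. One simple technique, shown below as a 2-D sketch, is dead reckoning from the robot's own growth: integrating short growth increments along measured headings. This is an illustrative assumption, not an algorithm from the vine-robot literature; in practice such estimates drift and would be fused with other sensor data:

```python
import math

def estimate_tip_position(segments, start=(0.0, 0.0)):
    """Dead-reckon the tip position of a growing robot in 2-D.

    `segments` is a list of (length, heading) pairs: each entry is a
    short growth increment (meters) at a measured heading (radians).
    Errors accumulate over time, which is why dead reckoning is
    usually combined with other localization techniques.
    """
    x, y = start
    for length, heading in segments:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y
```

Growing one meter forward and then one meter after a 90-degree turn, for instance, places the estimated tip at (1.0, 1.0).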
====Detecting heat====
Vine robots can be equipped with a range of sensors that enable them to detect heat in their surroundings. Infrared cameras and thermal imaging systems are among the most commonly used sensors for detecting heat in robots.
'''Infrared cameras'''


Infrared cameras work by detecting infrared radiation emitted by objects and converting it into visual representations that can be interpreted by the robot's control systems.  
<br />
[[File:Thermal imaging.jpg|thumb|Thermal imaging with cold spots]]


'''Thermal imaging sensors'''


Thermal imaging systems use more advanced technology that can detect temperature changes with higher precision, which can enable vine robots to identify potential sources of heat and determine the location and movement of individuals in a given environment.
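As an illustration of how heat readings could be turned into candidate survivor locations, the sketch below scans one thermal frame for pixels near human skin temperature. The frame format and the 30–40 °C bounds are assumptions for the example, not calibrated values:

```python
def find_warm_regions(frame, lo=30.0, hi=40.0):
    """Return (row, col) pixels whose temperature (deg C) falls within
    an assumed human skin-temperature range.

    `frame` is a 2-D list of temperature readings, standing in for one
    image from a thermal camera.
    """
    hits = []
    for r, row in enumerate(frame):
        for c, temp in enumerate(row):
            if lo <= temp <= hi:
                hits.append((r, c))
    return hits
```

Clusters of hits in consecutive frames would then be candidate locations to inspect more closely.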


'''Contact sensors'''


Contact sensors can be used to detect heat sources that come into direct contact with the robot's sensors. For example, if a vine robot comes into contact with a hot object such as a stove or a piece of machinery, the heat from the object can be detected by the contact sensors.


'''Gas sensors'''


Gas sensors can be used to detect the presence of combustible gases such as methane or propane, which can be an indicator of a potential fire or explosion.
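A minimal sketch of how such gas readings might be monitored is shown below; the ppm threshold is an arbitrary illustrative value, not a calibrated safety limit:

```python
def gas_alarm(readings, threshold_ppm=5000):
    """Return the indices of gas-concentration readings (ppm) that
    exceed an assumed safety threshold, e.g. for methane or propane.

    A non-empty result would signal a potential fire or explosion
    hazard along the robot's path.
    """
    return [i for i, ppm in enumerate(readings) if ppm > threshold_ppm]
```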
Detecting sound is another critical capability for vine robots, especially in search and rescue operations. Vine robots can be equipped with a range of sensors that enable them to detect and interpret sound waves in their surroundings. A common type of sensor used to detect sound is a microphone. Microphones can be used to capture sound waves in the environment and convert them into electrical signals that can be interpreted by the robot's control systems or communicated back to a human operator for analysis.
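A simple way to act on microphone data is to compare the loudness of each audio buffer against the ambient level, flagging buffers that are clearly louder, for example a trapped survivor tapping or calling out. The sketch below is illustrative; the factor-of-three trigger is an assumption:

```python
import math

def rms_level(samples):
    """Root-mean-square level of an audio buffer of normalized samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def detect_sound(samples, ambient_rms, factor=3.0):
    """Flag a buffer whose RMS level clearly exceeds the measured
    ambient level (assumed trigger: three times the ambient RMS)."""
    return rms_level(samples) > factor * ambient_rms
```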


'''Ultrasonic sensors'''
 
Vine robots can also be equipped with ultrasonic sensors, which enable them to detect sound waves that are beyond the range of human hearing. Ultrasonic sensors work by emitting high-frequency sound waves that bounce off objects in the environment and return to the sensor, producing an electrical signal that can be interpreted by the robot's control systems.
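The distance computation behind this principle is straightforward: the echo's round-trip time is halved and multiplied by the speed of sound. A minimal sketch, assuming roughly room-temperature air:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 deg C

def echo_distance(round_trip_s):
    """Distance (m) to an obstacle from an ultrasonic echo's round-trip
    time (s): the pulse travels out and back, so the time is halved."""
    return SPEED_OF_SOUND * round_trip_s / 2.0
```

A 10 ms round trip, for example, corresponds to an obstacle about 1.7 m away.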


=====Vibration sensors=====


In addition to ultrasonic sensors, vine robots can be equipped with vibration sensors, which can detect sound waves that are not audible to the human ear. Vibration sensors work by detecting the tiny vibrations in solid objects caused by sound waves passing through them. These vibrations are converted into electrical signals that can be interpreted by the robot's control systems.
Detecting movement is another critical capability for vine robots, especially in search and rescue operations. Vine robots can be equipped with a range of sensors that enable them to detect and interpret movement in their surroundings.


=====Cameras=====
 
One common type of sensor used in vine robots for detecting movement is a camera. Cameras can be used to capture visual data from the environment and interpret it using computer vision algorithms to detect movement.
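One of the simplest such computer vision techniques is frame differencing: movement is flagged when enough pixels change between consecutive frames. The sketch below is a minimal, hedged illustration; the frames, thresholds, and function name are invented, and a real system would add filtering and calibrated thresholds.

```python
# Minimal frame-differencing sketch: movement is flagged when enough pixels
# change between two consecutive grayscale frames (given as nested lists of
# 0-255 intensities). All thresholds here are illustrative.

def detect_movement(prev_frame, curr_frame, pixel_threshold=30, min_changed=5):
    """Return True if at least `min_changed` pixels differ by more than
    `pixel_threshold` intensity levels between the two frames."""
    changed = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            if abs(p - c) > pixel_threshold:
                changed += 1
    return changed >= min_changed

# A static scene versus one where a bright object entered the lower-right corner.
static = [[10] * 8 for _ in range(8)]
moved = [row[:] for row in static]
for r in range(5, 8):
    for c in range(5, 8):
        moved[r][c] = 200

print(detect_movement(static, static))  # no change
print(detect_movement(static, moved))   # 9 changed pixels -> movement
```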


=====Motion sensors=====
 
Another type of sensor used in vine robots for detecting movement is a motion sensor.


There are four types of motion sensors commonly used in industry:


#'''Passive infrared (PIR) sensors:''' These sensors detect changes in the level of infrared radiation (heat) in their field of view caused by moving objects.
#'''Ultrasonic sensors:''' These sensors emit high-frequency sound waves and measure the time it takes for the waves to bounce back off an object. If an object moves in front of the sensor, it changes the time it takes for the sound waves to bounce back, which triggers a response.
#'''Microwave sensors:''' Similar to ultrasonic sensors, but these sensors emit electromagnetic waves instead.
#'''Vibration sensors:''' These sensors measure changes in acceleration caused by movement.


<br />
===Evaluation of sensors in localization===
The sensor types above have already explained the basics of detecting a human. An evaluation will now be made, including other sensors.


'''Types of Sensors:'''


*Thermal Imaging Camera: This sensor creates an image of an object using the infrared radiation emitted from the object. It can detect the heat of a person in dark environments, making it efficient for search and rescue missions. It can identify the location of a person by detecting their body heat through walls or other obstructions. (Engineering, 2022)
*Chemical Sensor: This is a device that measures and detects chemical qualities in the air. It can be used to locate areas which may have hazardous materials, which pose risks to both the rescuers and survivors. (''What Is a Chemical Sensor?'', 2019)
*Light Detection and Ranging (LiDaR): These sensors use light in the form of a pulsed laser to measure distances to objects. They can be used to create 3D maps of different environments and can detect movement, such as a survivor moving around debris. (''What Is Lidar?'', n.d.)
*Global Positioning System (GPS): This is a satellite constellation which provides accurate positioning and navigation measurements worldwide. It can track the movements of different people and can help create maps of the environment to help rescuers locate survivors. (''What Is GPS and How Does It Work?'', n.d.)
 
These sensors are evaluated in the table below and ranked based on multiple sources. To get a better understanding of the outcomes, '''the underlying results can be found in the appendix'''. The ranking criteria are as follows:
 
*Range: Maximum distance the sensor can detect an object
*Resolution: The smallest object the sensor can detect
*Accuracy: The precision of the measurements
*Cost: The cost of the sensor


{| class="wikitable"
|Medium
|2
|-
|Chemical Sensor
|Very high
|High
|High
|Very high
|6
|-
|-
|Light Detection and Ranging (LiDaR)
|-
|Global Positioning System (GPS)
|Very High
|Medium
|Medium
|Medium
|4
|}
Based on the ranking, it was determined that thermal imaging cameras are the most effective sensors for localizing survivors under rubble. This is due to their ability to detect the heat given off by a human body from a large range with very high accuracy. Additionally, this sensor can be used in dark environments, which some others cannot. Even if a survivor is buried under the rubble, their body will still give off a heat signature that can be detected by a thermal imaging camera. However, one major limitation of using only this sensor is that no visual image of the surrounding environment can be provided. As a result, a visual camera can be used in conjunction with a thermal imaging camera, allowing rescuers to get a better picture of the situation by identifying potential obstacles and hazards.


==Pathfinding==
Pathfinding is the process of finding the shortest or most efficient path between two points in a given environment. For a vine robot, pathfinding is critical as it allows the robot to navigate through the porous environment of destroyed buildings and reach its intended destination. Pathfinding algorithms are typically used to determine the best route for the robot to take based on factors such as obstacles, terrain and distance.


One approach is to use a combination of reactive and deliberative pathfinding strategies. Reactive pathfinding involves using sensors to detect obstacles in real time and making rapid adjustments to the robot's path to avoid them. This can be especially useful in environments where the obstacles and terrain are constantly changing, such as in a destroyed building. However, reactive pathfinding algorithms are often less efficient than traditional pathfinding algorithms because they only consider local information and may not find the optimal path. Deliberative pathfinding, on the other hand, involves planning paths ahead of time based on a map or model of the environment. While this approach can be useful in some cases, it may not be practical in a destroyed building where the environment is constantly changing. Another approach is to use machine learning algorithms to train the robot to navigate through the environment based on real-world data. The latter approach is not the main focus of this paper; the combination of reactive and deliberative strategies is discussed in further detail.
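To make the deliberative side concrete, the sketch below plans a shortest path over a known grid map using breadth-first search. This is a hedged illustration, not the algorithm used by any particular vine robot: the grid, start, and goal are invented, and vine-robot-specific constraints (no revisits, limited turning) are deliberately left out.

```python
from collections import deque

# Illustrative deliberative planner: breadth-first search over a known grid
# map (0 = passable rubble, 1 = large debris). A real deployment would build
# the map from sensor data and replan whenever the environment changes.

def shortest_path(grid, start, goal):
    """Return the list of cells on a shortest 4-connected path, or None."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = shortest_path(grid, (0, 0), (2, 3))
print(path)
```

In a combined reactive/deliberative system, a planner like this would be re-invoked whenever the sensors report that the map has changed.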


===Pathfinding for vine robots===
As described above, sensors are one of the critical components in reactive and deliberative pathfinding. Sensors provide the real-time information that the algorithm needs to navigate the vine robot through a dynamic environment, avoid obstacles, and adapt to changing conditions. In the case of the vine robot system, this mechanism of reactive pathfinding is used to localize the survivor, as described in a previous section of this paper. This section instead focuses on deliberative pathfinding, in which sensory information read from a radar is used to build a model of the environment once signs of life have been detected within a collapsed building. Such radars currently require a surface area far larger than that available on the vine robot itself. However, progress is being made on reducing the size of these radars so that they can operate from a drone. Current experiments seem promising, but are not yet fully realized within the research community (example of a drone with a radar). When this becomes a viable option, attaching such a radar to the vine robot should be possible.


Thus, a proposed solution for now is to use these large sensors in conjunction with the vine robot, thereby increasing the chance of locating a survivor: the rescue team first scans the environment and then sets up the vine robot in the correct location. The built-in localization algorithm, which includes pathfinding capabilities, should then be able to correctly identify a survivor within the rubble. It is worth noting that this greatly increases the cost of the search and rescue mission as a whole, and that this approach does not enhance the localization capabilities of the vine robot itself; the mechanisms of reactive pathfinding already play a role within the localization techniques of the vine robot. This extension is therefore not necessary for the vine robot to work as intended, but it can enhance the success rate of finding survivors.


===Ground Penetrating Radar (GPR)===
The underlying technology for GPR is electromagnetic propagation. Many technologies realize detection, but only electromagnetic ones ensure the best results in terms of speed and accuracy of a survey, working also in a noisy environment. Three methods can be used to detect the presence of a person in a given area through the use of electromagnetic propagation. The first method detects active or passive electronic devices carried by the person, while the second method detects the person's body as a perturbation of the backscattered electromagnetic wave. In both cases, a discontinuity in the medium of electromagnetic wave propagation is required, which means that trapped individuals must have some degree of freedom of movement. However, these methods are only effective in homogeneous mediums, which is not always the case in collapsed buildings, where debris can have stratified slabs and heavy rubble at different angles. Therefore, the retrieval of the dielectric contrast features cannot be used as a means for detection in these situations.


Moreover, the irregular surface and instability of ruins make linear scans of radar at soil contact impossible. Instead, the sensor system must be located in a static position or on board an aerial vehicle to detect the presence of individuals. (Ferrara V., 2015)


====Vital Sign Detection====
Vital signs are detected as follows: a continuous wave (CW) signal is transmitted towards the human subject; the movements of heartbeat and breathing modulate the phase of the signal, generating a Doppler shift on the reflected signal, which travels back to the receiver; finally, demodulation of the received signal reveals the vital signs. These radar systems can be compact in size, making it feasible to fit them on unmanned aerial vehicles (UAVs). Unfortunately, the vibration and aerodynamic instability of a flying vehicle together limit the accuracy and validity of the measurements. Aside from this, the frequency at which these radars operate greatly impacts the penetration depth of the sensing capability. Currently, GPR systems have an increased sensitivity level due to the switch to ultra-wideband radars. The increase in sensitivity does improve the probability of detection, but at the same time it can produce a larger number of false alarms. There is ongoing research on both of these systems, but both still need further improvement before they can work in present-day situations. (Ferrara V., 2015)
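The CW Doppler principle can be illustrated with a toy numerical model: chest motion phase-modulates the reflected carrier, and the breathing rate is recovered from the demodulated phase with a DFT. All parameters below (carrier frequency, breathing rate, chest amplitude, observation window) are invented for illustration and do not describe any specific radar.

```python
import cmath
import math

# Toy model of CW Doppler vital-sign detection. Chest displacement x(t)
# shifts the phase of the reflected carrier by 4*pi*x(t)/wavelength.

C = 3e8
CARRIER_HZ = 433e6                # example ISM-band carrier
WAVELENGTH = C / CARRIER_HZ       # about 0.69 m

FS = 10.0                         # samples per second
DURATION = 40.0                   # seconds of observation
BREATH_HZ = 0.25                  # true breathing rate (15 breaths/min)
CHEST_AMPL = 0.005                # chest displacement in metres

n = int(FS * DURATION)
signal = []
for i in range(n):
    t = i / FS
    x = CHEST_AMPL * math.sin(2 * math.pi * BREATH_HZ * t)
    signal.append(cmath.exp(1j * 4 * math.pi * x / WAVELENGTH))

# Demodulate: the phase of the baseband signal tracks chest displacement.
phase = [cmath.phase(s) for s in signal]

def dft_mag(seq, k):
    """Magnitude of the k-th DFT bin of seq."""
    return abs(sum(v * cmath.exp(-2j * math.pi * k * i / len(seq))
                   for i, v in enumerate(seq)))

# Pick the strongest non-DC bin in a plausible breathing band (up to 1 Hz).
best_k = max(range(1, 41), key=lambda k: dft_mag(phase, k))
estimated_hz = best_k * FS / n
print(estimated_hz)
```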


=====Constant false alarm rate (CFAR)=====
Airborne ground penetrating radar can effectively survey large areas that may not be easily accessible, making it a valuable tool for underground imaging. One method to address this imaging challenge is to utilize an adaptive detection procedure known as CFAR. CFAR detection is an adaptive algorithm commonly used in radar systems to detect target returns amidst noise, clutter, and interference. Among the CFAR algorithms, the cell averaging (CA) CFAR is significant, as it estimates the mean background level by averaging the signal level in M neighboring range cells. The CA-CFAR algorithm has shown promising capabilities in achieving a stable solution and providing well-focused images. By using this technique, the actual location and size of buried anomalies can be inferred. (Ilaria C. et al, 2012)
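The CA-CFAR idea described above can be sketched as follows: for each cell under test, the noise level is estimated by averaging neighboring training cells (skipping a few guard cells), and a detection fires when the cell exceeds that estimate by a scaling factor. The signal and parameter values below are invented for illustration.

```python
# Sketch of cell-averaging CFAR detection over a 1-D power profile.
# `train` cells on each side form the noise estimate; `guard` cells next to
# the cell under test are skipped so the target's own energy is excluded.

def ca_cfar(samples, train=4, guard=1, scale=3.0):
    """Return indices of cells whose power exceeds scale * local average."""
    detections = []
    for i in range(len(samples)):
        training = []
        for j in range(i - guard - train, i - guard):       # leading cells
            if 0 <= j < len(samples):
                training.append(samples[j])
        for j in range(i + guard + 1, i + guard + 1 + train):  # trailing cells
            if 0 <= j < len(samples):
                training.append(samples[j])
        if not training:
            continue
        noise_estimate = sum(training) / len(training)
        if samples[i] > scale * noise_estimate:
            detections.append(i)
    return detections

# Flat noise floor of 1.0 with a strong return at index 10.
power = [1.0] * 20
power[10] = 12.0
print(ca_cfar(power))  # only the strong return is flagged
```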


=====Ultra-wideband (UWB)=====
 
The proposed method for vital sign detection is highly robust and suitable for detecting multiple trapped victims in challenging environments with low signal-to-noise and clutter ratios. The removal of clutter has been effective, and the method can extract multiple vital sign information accurately and automatically. These findings have significant implications for disaster rescue missions, as they have the potential to improve the effectiveness of rescue operations by providing reliable and precise vital sign detection. Future research should focus on extending the method to suppress or separate any remaining clutter and developing a real-time and robust approach to detect vital signs in even more complex environments. (Yanyun X. et al, 2013)
<br />
 
==Simulation==


===Goal===
To evaluate the localization capabilities of the Vine Robot, a simulation was created. For a Vine Robot to accurately determine the location of a target, the robot must be able to get in close proximity to that target. As such, the robot's localization algorithm must include pathfinding capabilities. The simulation is used to test how a custom-developed localization algorithm picks up stimuli from the environment and reacts to these stimuli to locate the survivor from whom they originate. For the simulation to succeed, the robot is required to reach the survivor's location in at least 75% of the randomly generated scenarios.


The complexity of creating such a vine robot and its testing environment, due to the sheer amount of different possible scenarios and variables involved in finding survivors within debris, is currently outside of the scope of this course. However, it is believed that the noisy environment and the robot can be simulated to retain their essential properties. The simulated environment will be generated using clusters of debris, with random noise representing interference, and intermittent stimuli that indicate the presence of a survivor. The random noise and intermittent stimuli are critical in simulating the robot's sensors, since the data received from the sensors are unreliable when going through debris.
Furthermore, the simulation assumes that the maneuverability of the Vine Robot has reached a level of precision similar to that of a snake. Current state-of-the-art Vine Robots have not reached this level, but it is expected to become the standard level of maneuverability in the future.


===Specification===
[[File:Simulation environment.png|thumb|A screenshot of the simulation environment with all components visible.]]
The simulation is made using NetLogo 3D. This software has been chosen for its simplicity in representing the environment using patches and turtles.


The vine robot is represented as a turtle. For algorithms involving swarm robotics, multiple turtles can be used. The robot can only move forwards, with a maximum speed of 10 centimetres per second and a turning radius of at least 20 centimetres. The robot cannot move onto patches it has visited before, because the vine robot would then hit itself. The robot has a viewing angle of at most 180 degrees. When involving swarm robotics, the different robots are not allowed to cross each other.
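These movement rules can be sketched as follows. The snippet is a Python stand-in for the NetLogo logic rather than the simulation's actual code, and the heading-to-offset mapping is an invented convention: the robot moves one patch per tick, may change heading by at most 45 degrees, and may not enter debris or an already-visited patch.

```python
# Stand-in for the per-tick movement rule: one patch forward, heading change
# of at most 45 degrees, and no entering debris or previously visited patches.
# The mapping from headings (degrees) to grid offsets is illustrative.

HEADINGS = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1),
            180: (0, -1), 225: (-1, -1), 270: (-1, 0), 315: (-1, 1)}

def legal_moves(position, heading, debris, visited):
    """All (new_heading, new_position) pairs the robot may take this tick."""
    moves = []
    for turn in (-45, 0, 45):                 # turning-radius limit
        new_heading = (heading + turn) % 360
        dx, dy = HEADINGS[new_heading]
        new_pos = (position[0] + dx, position[1] + dy)
        if new_pos not in debris and new_pos not in visited:
            moves.append((new_heading, new_pos))
    return moves

# Robot at the origin with debris on its forward-left diagonal and its own
# previous patch behind it: only straight ahead and forward-right remain.
moves = legal_moves((0, 0), 0, debris={(-1, 1)}, visited={(0, -1)})
print(moves)
```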
 
The environment is randomly filled with large chunks of debris. These are represented by grey patches. The robot may not move onto these patches. To simulate the debris from collapsed buildings more closely, the debris patches are clustered together. All other patches contain smaller rubble, which the robot can move through.


The environment also contains an adjustable number of survivors, which are represented by green patches. These survivors are stationary, as they are stuck underneath the rubble. Each survivor gives off heat in the form of stimuli, which the robot can pick up on within its field of view. The stimuli are represented by turtles that move away from the corresponding survivor and die after a few ticks. The survivor itself can also be detected when within the robot’s field of view. Once the robot has reached a survivor, the user is notified and the simulation is stopped.
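The stimulus mechanic can be sketched as a toy Python version of the NetLogo turtles: each tick, every survivor emits a stimulus that drifts away from the source and is removed after a fixed lifespan. The lifespan and drift directions below are illustrative values, not the simulation's settings.

```python
import random

# Toy version of the stimulus mechanic: each tick, a survivor emits one
# stimulus particle (x, y, dx, dy, age) that drifts outward and is removed
# once it reaches LIFESPAN ticks. The constant is kept small for illustration.

LIFESPAN = 5

def step(stimuli, survivor):
    """Advance every stimulus one tick and emit a new one at the survivor."""
    alive = []
    for x, y, dx, dy, age in stimuli:
        if age + 1 < LIFESPAN:                 # older stimuli die off
            alive.append((x + dx, y + dy, dx, dy, age + 1))
    # New stimulus drifts in a random direction away from the survivor.
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    alive.append((survivor[0] + dx, survivor[1] + dy, dx, dy, 0))
    return alive

stimuli = []
for _ in range(20):
    stimuli = step(stimuli, survivor=(0, 0))
print(len(stimuli))  # the population settles at the lifespan
```

Because one stimulus is born and (after warm-up) one dies each tick, the number of live stimuli settles at the lifespan, which is what makes the detection intermittent from the robot's point of view.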


Red patches are used to simulate random noise and false positives. These red patches send out stimuli in the same way as the green patches do. The robot cannot distinguish these stimuli from one another, but it can identify red patches as false positives when they are in its field of view.


===Design choices===
To properly represent the robot's physical capabilities and limitations, the simulation environment requires a sense of scale. For simplicity, patches were given a dimension of 10 centimetres cubed and each tick represents 1 second. As a result, any patch that is not marked as a debris chunk patch has an opening that is larger than the robot's main body tube with a diameter of 5 centimetres. For the robot's mobility, this entails that a single patch can be traversed per tick and that sideways movement is limited to a 45-degree angle per tick.


One might argue that the robot can turn with a tighter radius when pressed up against a chunk of debris. In theory, this would function by trying to turn in a certain direction while pressing up against the debris. The forwards movement would then be blocked by the debris, while the sideways movement can continue unrestricted. However, this theory has a major downside. Deliberately applying pressure to a chunk of stationary debris could cause it, or the surrounding debris, to shift. Such a shift has unpredictable side effects, which could have disastrous consequences for survivors stuck in the debris.


===Simulation Scenario===
In active use the robot has to guide rescuers to possible locations of survivors in the environment. This scenario can be described in PEAS as:


*Collaborative
*Single-agent
*Dynamic
*Sequential<br />


===Test Plan===
The objective of the test plan is to evaluate the localization algorithm of the Vine Robot in a simulated environment with clusters of debris, random noise representing interference, and intermittent stimuli that indicate the presence of a survivor. The test aims to ensure the Vine Robot can locate a survivor in at least 75% of the randomly generated scenarios.




====Test Metrics:====
The results found are used to evaluate the Vine Robot's localization algorithm. The percentage of scenarios in which the Vine Robot successfully locates the survivor is given a grade between 1 and 10. Each grade is then multiplied by its percentage of importance, giving a better representation of the 75% success rate for the Vine Robot; the aim is thus to achieve an average grade of 7.5.
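As a purely hypothetical worked example of this grading scheme (the scenario grades and importance weights below are invented, not measured results):

```python
# Hypothetical worked example of the weighted grading: each scenario's grade
# (1-10) is multiplied by its importance weight, and the weighted average is
# compared against the 7.5 target. All numbers here are invented.

grades = {"scenario 1": 8.0, "scenario 2": 7.0, "scenario 3": 7.5}
weights = {"scenario 1": 0.5, "scenario 2": 0.3, "scenario 3": 0.2}

weighted_average = sum(grades[s] * weights[s] for s in grades)
print(weighted_average)            # 8.0*0.5 + 7.0*0.3 + 7.5*0.2 = 7.6
print(weighted_average >= 7.5)     # meets the target in this made-up case
```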
 
====Test results:====
Below are some of the results shown for Scenario 3. The other results are in the appendix.
{| class="wikitable"
|+
! colspan="4" |Scenario 3
|-
! colspan="4" |lifespan of stimuli 50
|-
|Run iteration
|Success/Failure
|Reason for Failure / Blocks discovered
|Ticks
|-
|#1
|Success
|33/50 blocks discovered
|509
|-
|#2
|Failure
|Wall hit / 15/50 blocks discovered
|228
|-
|#3
|Failure
|Wall hit / 6/50 blocks discovered
|74
|-
|#4
|Success
|1/50 blocks discovered
|14
|-
|#5
|Success
|9/50 blocks discovered
|108
|-
|#6
|Failure
|Wall hit / 24/50 blocks discovered
|167
|-
|#7
|Success
|10/50 blocks discovered
|56
|-
|#8
|Success
|12/50 blocks discovered
|149
|-
|#9
|Failure
|Wall hit / 8/50 blocks discovered
|51
|-
|#10
|Failure
|Dead end / 8/50 blocks discovered
|80
|-
! colspan="4" |Lifespan of stimuli: 100
|-
|#1
|Success
|4/50 blocks discovered
|11
|-
|#2
|Success
|19/50 blocks discovered
|278
|-
|#3
|Success
|
|6
|-
|#4
|Success
|7/50 blocks discovered
|94
|-
|#5
|Success
|3/50 blocks discovered
|22
|-
|#6
|Success
|16/50 blocks discovered
|105
|-
|#7
|Failure
|Wall hit / 5/50 blocks discovered
|14
|-
|#8
|Failure
|Dead end / 10/50 blocks discovered
|106
|-
|#9
|Success
|8/50 blocks discovered
|39
|-
|#10
|Failure
|Wall hit / 3/50 blocks discovered
|3
|-
|
|
|Average ticks
|72
|}
<br />
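As a quick sanity check, the success/failure column of the Scenario 3 table above can be tallied with a few lines of Python (the outcome lists are transcribed directly from the table):

```python
# Success ("S") / Failure ("F") outcomes transcribed from the Scenario 3 table.
lifespan_50  = ["S", "F", "F", "S", "S", "F", "S", "S", "F", "F"]
lifespan_100 = ["S", "S", "S", "S", "S", "S", "F", "F", "S", "F"]

def success_rate(runs):
    """Fraction of runs that ended with the survivor located."""
    return runs.count("S") / len(runs)

rate_50 = success_rate(lifespan_50)                  # 0.5
rate_100 = success_rate(lifespan_100)                # 0.7
overall = success_rate(lifespan_50 + lifespan_100)   # 0.6, below the 0.75 target
```

The overall 60% success rate for this scenario is consistent with the conclusion that the algorithm does not yet meet the 75% requirement.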
 
====Test conclusion====
The simulation algorithm used to model the vine robot's behavior in a disaster response scenario combined pathfinding with survivor detection. The pathfinding algorithm used a search-based approach to navigate through the debris clusters and identify potential paths for the robot to follow, while the survivor detection algorithm relied on the robot's sensors to detect signs of life, such as heat. While this provided a useful framework for testing the vine robot's capabilities, it also had several limitations that may have affected the accuracy of the simulation results. For example, the algorithm did not account for variations in the environment, such as the presence of dead ends. This led to inaccuracies in the robot's pathfinding decisions, causing it to get stuck quite often.
 
Another shortcoming of the simulation algorithm was its reliance on predetermined search strategies and decision-making processes. Although it was designed to identify efficient paths through the debris clusters and to locate survivors based on sensor data, it could not adapt to unforeseen obstacles, which limited its ability to respond to changing conditions and make optimal decisions in real time.
 
Finally, the simulation may have been limited by the accuracy and precision of the simulated sensor data. While the algorithm emulated advanced sensor technology to detect survivors and navigate through the debris clusters, the accuracy and precision of these simulated sensors were constrained by the limitations of NetLogo. This could have led to inaccuracies in the simulation results and limited the algorithm's ability to accurately model the vine robot's behavior in a real-world disaster response scenario.
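To make the last point concrete, the simulated sensing can be modelled as a true stimulus strength corrupted by noise, followed by a simple detection threshold. This is a minimal sketch of such a model, our own assumption for illustration, not the actual NetLogo implementation:

```python
import random

def sensed_value(true_value, noise_std=0.5, rng=random):
    """Simulated sensor reading: the true stimulus strength corrupted
    by zero-mean Gaussian noise (noise_std is an assumed parameter)."""
    return true_value + rng.gauss(0.0, noise_std)

def detects_survivor(reading, threshold=1.0):
    """With noisy readings, weak stimuli near the threshold can be
    missed (false negatives) and pure noise can cross it (false positives)."""
    return reading >= threshold
```

Under this model, the higher the noise standard deviation relative to the detection threshold, the more often the simulated robot misclassifies stimuli, which is one plausible source of the failures observed in the test runs.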


==Conclusion==
Overall, the vine robot shows potential for urban search and rescue operations. The user investigation made clear that the technology should be semi-autonomous and inexpensive in terms of both material and training. Furthermore, it was found that the vine robot could be equipped with a thermal imaging camera and a visual camera in order to localize survivors in rubble. Due to time and budget constraints, a simulation was used to test the localization algorithm; a physical prototype could have given more exact insights into the challenges regarding the algorithm. The results of the simulation tests show that the success rate of the customized algorithm is not yet high enough for the algorithm to be accepted, as the average grade is below 7.5. The algorithm will therefore need to be optimized, in particular to account for variations in the environment, and tested again before it can be used in real-life scenarios. Further research, especially in partnership with the users, is necessary before the vine robot can be implemented in USAR operations.


==Future work==
First of all, the customized localization algorithm needs to be improved. The simulation as it stands is fully autonomous, but the user investigation pointed out that the robot should be semi-autonomous, since fully autonomous technology currently cannot handle the chaotic environment.
 
Secondly, if the robot is to be used semi-autonomously in rescue operations, USAR members need to be trained. This training will teach members how to set up the device, how to read the data it returns, and how to control it when necessary. For instance, if the robot senses a stimulus that is not a human being, the operator could steer it in another direction. It should be investigated what this training will look like, who will need it, and who will provide it.
 
Thirdly, when the vine robot has found a survivor in rubble, it has to communicate the exact location to the human operator. This will be done via the program the operator uses to control the robot semi-autonomously.
 
Furthermore, it was found that the vine robot could be equipped with a thermal imaging camera and a visual camera. However, it was not investigated how these sensors could be mounted on the robot. Therefore, a detailed design and manufacturing plan should be made.
 
Lastly, it has not been investigated how the robot can be retracted from rubble after use. Before it is deployed in USAR missions, it should be researched whether the robot can be retracted without damage to its sensors. In addition, it should be explored how many times the robot can be reused.
 
Overall, once the customized algorithm passes the required success rate and the vine robot is equipped with the right sensors according to a design plan, the robot should be reviewed by USAR teams and INSARAG for deployment at disaster sites.
 
==References==
Ackerman, E. (2023i). Boston Dynamics’ Spot Is Helping Chernobyl Move Towards Safe Decommissioning. IEEE Spectrum. <nowiki>https://spectrum.ieee.org/boston-dynamics-spot-chernobyl</nowiki>
 
Agarwal, T. (2019, July 25). ''Vibration Sensor: Working, Types and Applications''. ElProCus - Electronic Projects for Engineering Students. <nowiki>https://www.elprocus.com/vibration-sensor-working-and-applications/#:~:text=The%20vibration%20sensor%20is%20also,changing%20to%20an%20electrical%20charge</nowiki>
 
Agarwal, T. (2019, August 16). ''Sound Sensor: Working, Pin Configuration and Its Applications''. ElProCus - Electronic Projects for Engineering Students. <nowiki>https://www.elprocus.com/sound-sensor-working-and-its-applications/</nowiki>
 
Agarwal, T. (2022, May 23). ''Different Types of Motion Sensors And How They Work''. ElProCus - Electronic Projects for Engineering Students. <nowiki>https://www.elprocus.com/working-of-different-types-of-motion-sensors/</nowiki>
 
AJLabs. (2023). Infographic: How big were the earthquakes in Turkey, Syria? Earthquakes News | Al Jazeera. https://www.aljazeera.com/news/2023/2/8/infographic-how-big-were-the-earthquakes-in-turkey-syria
 
Al-Naji, A., Perera, A. G., Mohammed, S. L., & Chahl, J. (2019). Life Signs Detector Using a Drone in Disaster Zones. Remote Sensing, 11(20), 2441. <nowiki>https://doi.org/10.3390/rs11202441</nowiki>
 
Ambe, Y., Yamamoto, T., Kojima, S., Takane, E., Tadakuma, K., Konyo, M., & Tadokoro, S. (2016). Use of active scope camera in the Kumamoto Earthquake to investigate collapsed houses. International Symposium on Safety, Security, and Rescue Robotics. <nowiki>https://doi.org/10.1109/ssrr.2016.7784272</nowiki>
 
Analytika (n.d.). ''High quality scientific equipment.'' <nowiki>https://www.analytika.gr/en/product-categories-46075/metrites-anichneftes-aerion-nh3-co-co2-ch4-hs-nox-46174/portable-voc-gas-detector-measure-range-0-10ppm-resolution-0-01ppm-73181_73181/#:~:text=VOC%20gas%20detector%20.-,Measure%20range%3A%200%2D10ppm%20.,Resolution%3A%200.01ppm</nowiki>
 
Anthes, G. (2010). Robots gear up for disaster response. Communications of the ACM, 15-16.
 
''Assembly''. (n.d.). <nowiki>https://www.assemblymag.com/articles/85378-sensing-with-sound#:~:text=These%20sensors%20provide%20excellent%20long,waves%20cannot%20be%20accurately%20detected</nowiki>.
 
Auf der Maur, P., Djambazi, B., Haberthür, Y., Hörmann, P., Kübler, A., Lustenberger, M., Sigrist, S., Vigen, O., Förster, J., Achermann, F., Hampp, E., Katzschmann, R. K., & Siegwart, R. (2021). RoBoa: Construction and Evaluation of a Steerable Vine Robot for Search and Rescue Applications. 2021 IEEE 4th International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 2021, pp. 15-20, doi: 10.1109/RoboSoft51838.2021.9479192.
 
Baker Engineering and Risk Consultants. (n.d.). ''Gas Detection in Process Industry – The Practical Approach''. Gas Detection in Process Industry – the Practical Approach. <nowiki>https://www.icheme.org/media/8567/xxv-poster-07.pdf</nowiki>
 
Bangalkar, Y. V., & Kharad, S. M. (2015). Review Paper on Search and Rescue Robot for Victims of Earthquake and Natural Calamities. International Journal on Recent and Innovation Trends in Computing and Communication, 3(4), 2037-2040.
 
Blines. (2023). ''Project USAR''. Reddit. <nowiki>https://www.reddit.com/r/Urbansearchandrescue/comments/11lvoms/project_usar/</nowiki>
 
Blumenschein, L. H., Coad M. M., Haggerty D. A., Okamura A. M., & Hawkes E. W. (2020). Design, Modeling, Control, and Application of Everting Vine Robots. https://doi.org/10.3389/frobt.2020.548266
 
Boston Dynamics, Inc. (2019). Robotically negotiating stairs (Patent Nr. 11,548,151). Justia. <nowiki>https://patents.justia.com/patent/11548151</nowiki>
 
''Camera Resolution and Range''. (n.d.). <nowiki>https://www.flir.com/discover/marine/technologies/resolution/#:~:text=Most%20thermal%20cameras%20can%20see,640%20x%20480%20thermal%20resolution</nowiki>.
 
CO2 Meter. (2022, September 20). How to Measure Carbon Dioxide (CO2), Range, Accuracy, and Precision. <nowiki>https://www.co2meter.com/blogs/news/how-to-measure-carbon-dioxide#:~:text=This%200.01%25%20(100%20ppm),around%2050%20ppm%20(0.005%25)</nowiki>
 
Coad, M. M., Blumenschein, L. H., Cutler, S., Zepeda, J. A. R., Naclerio, N. D., El-Hussieny, H., ... & Okamura, A. M. (2019). Vine Robots. IEEE Robotics & Automation Magazine, 27(3), 120-132.
 
Coad, M. M., Blumenschein, L. H., Cutler, S., Zepeda, J. A. R., Naclerio, N. D., El-Hussieny, H., Mehmood, U., Ryu, J., Hawkes, E. W., & Okamura, A. M. (2020). Vine Robots: Design, Teleoperation, and Deployment for Navigation and Exploration. https://arxiv.org/pdf/1903.00069.pdf
 
Hu, D., Li, S., Chen, J., & Kamat, V. R. (2019). Detecting, locating, and characterizing voids in disaster rubble for search and rescue. Advanced Engineering Informatics, 42, 100974. ISSN 1474-0346. <nowiki>https://doi.org/10.1016/j.aei.2019.100974</nowiki>
 
De Cubber, G., Doroftei, D., Serrano, D., Chintamani, K., Sabino, R., & Ourevitch, S. (2013, October). The EU-ICARUS project: developing assistive robotic tools for search and rescue operations. In 2013 IEEE international symposium on safety, security, and rescue robotics (SSRR) (pp. 1-4). IEEE.
 
Delmerico, J., Mintchev, S., Giusti, A., Gromov, B., Melo, K., Horvat, T., Cadena, C., Hutter, M., Ijspeert, A., Floreano, D., Gambardella, L. M., Siegwart, R., & Scaramuzza, D. (2019). The current state and future outlook of rescue robotics. Journal of Field Robotics, 36(7), 1171–1191. <nowiki>https://doi.org/10.1002/rob.21887</nowiki>
 
Engineering, O. (2022a, October 14). Thermal Imaging Camera. <nowiki>https://www.omega.com/en-us/</nowiki>. <nowiki>https://www.omega.com/en-us/resources/thermal-imagers</nowiki>
 
F. Colas, S. Mahesh, F. Pomerleau, M. Liu and R. Siegwart, "3D path planning and execution for search and rescue ground robots," 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 2013, pp. 722-727, doi: 10.1109/IROS.2013.6696431. This paper presents a path planning system for a static 3D environment, with the use of lazy tensor voting.
 
Foxtrot841. (2023). ''Technology in SAR''. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/
 
''GPS.gov: Space Segment''. (n.d.). <nowiki>https://www.gps.gov/systems/gps/space/</nowiki>
 
Hampson, M. (2022). Detecting Earthquake Victims Through Walls. IEEE Spectrum. <nowiki>https://spectrum.ieee.org/dopppler-radar-detects-breath</nowiki>
 
Hatazaki, K., Konyo, M., Isaki, K., Tadokoro, S., and Takemura, F. (2007)  Active scope camera for urban search and rescue IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 2007, pp. 2596-2602, doi: 10.1109/IROS.2007.4399386
 
Hensen, M. (2023, April 5). ''GPS Tracking Device Price List''. Tracking System Direct. <nowiki>https://www.trackingsystemdirect.com/gps-tracking-device-price-list/</nowiki>
 
Huamanchahua, D., Aubert, K., Rivas, M., Guerrero, E. L., Kodaka, L., & Guevara, D. C. (2022). Land-Mobile Robots for Rescue and Search: A Technological and Systematic Review. 2022 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS). <nowiki>https://doi.org/10.1109/iemtronics55184.2022.9795829</nowiki>
 
Instruments, M. (2021, May 16). What are Gas Detectors and How are They Used in Various Industries? MRU Instruments - Emissions Analyzers. <nowiki>https://mru-instruments.com/what-are-gas-detectors-and-how-are-they-used-in-various-industries/</nowiki>
 
J. (n.d.-b). ''What is the difference between the Sound Level Sensor and the Sound Level Meter? - Technical Information Library''. Technical Information Library. <nowiki>https://www.vernier.com/til/3486</nowiki>
 
Kamezaki, M., Ishii, H., Ishida, T., Seki, M., Ichiryu, K., Kobayashi, Y., Hashimoto, K., Sugano, S., Takanishi, A., Fujie, M. G., Hashimoto, S., & Yamakawa, H. (2016). Design of four-arm four-crawler disaster response robot OCTOPUS. International Conference on Robotics and Automation. <nowiki>https://doi.org/10.1109/icra.2016.7487447</nowiki>
 
Kawatsuma, S., Fukushima, M., & Okada, T. (2013). Emergency response by robots to Fukushima-Daiichi accident: summary and lessons learned. Journal of Field Robotics, 30(1), 44-63. doi: 10.1002/rob.21416
 
Kruijff, G. M., Kruijff-Korbayová, I., Keshavdas, S., Larochelle, B., Janíček, M., Colas, F., Liu, M., Pomerleau, F., Siegwart, R., N., Looije, R., Smets, N. J. J. M., Mioch, T., Van Diggelen, J., Pirri, F., Gianni, M., Ferri, F., Menna, M., Worst, R., . . . Hlaváč, V. (2014). Designing, developing, and deploying systems to support human–robot teams in disaster response. Advanced Robotics, 28(23), 1547–1570. <nowiki>https://doi.org/10.1080/01691864.2014.985335</nowiki>
 
Lee, S., Har, D., & Kum, D. (2016). Drone-Assisted Disaster Management: Finding Victims via Infrared Camera and Lidar Sensor Fusion. 2016 3rd Asia-Pacific World Congress on Computer Science and Engineering (APWC on CSE). <nowiki>https://doi.org/10.1109/apwc-on-cse.2016.025</nowiki>
 
Li, F., Hou, S., Bu, C., & Qu, B. (2022). Rescue Robots for the Urban Earthquake Environment. Disaster Medicine and Public Health Preparedness, 17. <nowiki>https://doi.org/10.1017/dmp.2022.98</nowiki>
 
Lindqvist, B., Karlsson, S., Koval, A., Tevetzidis, I., Haluška, J., Kanellakis, C., Agha-mohammadi, A. A., & Nikolakopoulos, G. (2022). Multimodality robotic systems: Integrated combined legged-aerial mobility for subterranean search-and-rescue. Robotics and Autonomous Systems, 154, 104134. <nowiki>https://doi.org/10.1016/j.robot.2022.104134</nowiki>
 
Liu, Y., Nejat, G. Robotic Urban Search and Rescue: A Survey from the Control Perspective. J Intell Robot Syst 72, 147–165 (2013). <nowiki>https://doi.org/10.1007/s10846-013-9822-x</nowiki>
 
Lyu, Y., Bai, L., Elhousni, M., & Huang, X. (2019). An Interactive LiDAR to Camera Calibration. ''ArXiv (Cornell University)''. <nowiki>https://doi.org/10.1109/hpec.2019.8916441</nowiki>
 
ManOfDiscovery. (2023). ''Technology in SAR''. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/
 
Matsuno, F., Sato, N., Kon, K., Igarashi, H., Kimura, T., Murphy, R. (2014). Utilization of Robot Systems in Disaster Sites of the Great Eastern Japan Earthquake. In: Yoshida, K., Tadokoro, S. (eds) Field and Service Robotics. Springer Tracts in Advanced Robotics, vol 92. Springer, Berlin, Heidelberg. <nowiki>https://doi.org/10.1007/978-3-642-40686-7_</nowiki>
 
Murphy, R. R. (2017). Disaster Robotics. Amsterdam University Press
 
Musikhaus Thomann, (n.d.). ''Behringer ECM8000''. <nowiki>https://www.thomann.de/gr/behringer_ecm_8000.htm?glp=1&gclid=Cj0KCQjwxMmhBhDJARIsANFGOSsIyaQUtVOUtdpO6YneKcSOCqkvtbBp3neddsfc7hhylTgvQNprkJcaAvWmEALw_wcB</nowiki>
 
Nosirov, K. K., Shakhobiddinov, A. S., Arabboev, M., Begmatov, S., and Togaev, O.T. (2020) "Specially Designed Multi-Functional Search And Rescue Robot," Bulletin of TUIT: Managementand Communication Technologies: Vol. 2 , Article 1. <nowiki>https://doi.org/10.51348/tuitmct211</nowiki>
 
Osumi, H. (2014). Application of robot technologies to the disaster sites. Report of JSME Research Committee on the Great East Japan Earthquake Disaster, 58-74
 
Park, S., Oh, Y., & Hong, D. (2017). Disaster response and recovery from the perspective of robotics. International Journal of Precision Engineering and Manufacturing, 18(10), 1475–1482. <nowiki>https://doi.org/10.1007/s12541-017-0175-4</nowiki>
 
PCE Instruments UK: Test Instruments. (2023, April 9). ''- Sound Sensor | PCE Instruments''. <nowiki>https://www.pce-instruments.com/english/control-systems/sensor/sound-sensor-kat_158575.htm</nowiki>
 
Raibert, M. H. (2000). Legged Robots That Balance. MIT Press
 
Sanfilippo F, Azpiazu J, Marafioti G, Transeth AA, Stavdahl Ø, Liljebäck P. Perception-Driven Obstacle-Aided Locomotion for Snake Robots: The State of the Art, Challenges and Possibilities †. Applied Sciences. 2017; 7(4):336. <nowiki>https://doi.org/10.3390/app7040336</nowiki>
 
Seitron. (n.d.). ''Portable gas detector with rechargeable battery''. <nowiki>https://seitron.com/en/portable-gas-detector-with-rechargeable-battery.html</nowiki>
 
Shop - Wiseome Mini LiDAR Supplier. (n.d.). Wiseome Mini LiDAR Supplier. <nowiki>https://www.wiseomeinc.com/shop</nowiki>
 
Tadokoro, S. (Ed.). (2009). Rescue robotics: DDT project on robots and systems for urban search and rescue. Springer Science & Business Media.
 
Tai, Y., & Yu, T.-T. (2022). Using Smartphones to Locate Trapped Victims in Disasters. Sensors, 22, 7502. <nowiki>https://doi.org/10.3390/s22197502</nowiki>
 
Teledyne FLIR. (2016, April 12). Infrared Camera Accuracy and Uncertainty in Plain Language. <nowiki>https://www.flir.eu/discover/rd-science/infrared-camera-accuracy-and-uncertainty-in-plain-language/</nowiki>
 
Tenreiro Machado, J. A., & Silva, M. F. (2006). An Overview of Legged Robots
 
Texas Instruments. (n.d.). ''An Introduction to Automotive LIDAR''. <nowiki>https://www.ti.com/lit/slyy150#:~:text=LIDAR%20and%20radar%20systems%20can,%3A%20%E2%80%A2%20Short%2Drange%20radar</nowiki>.
 
''Thermal Zoom Cameras''. (n.d.). InfraTec Thermography Knowledge. <nowiki>https://www.infratec.eu/thermography/service-support/glossary/thermal-zoom-cameras/#:~:text=Depending%20on%20the%20camera%20configuration,and%20aircraft%20beyond%2030%20km</nowiki>.
 
Uddin, Z., & Islam, M. (2016). Search and rescue system for alive human detection by semi-autonomous mobile rescue robot. 2016 International Conference on Innovations in Science, Engineering and Technology (ICISET). <nowiki>https://doi.org/10.1109/iciset.2016.7856489</nowiki>
 
Urban Search and Rescue Team. (2023). ''Update zaterdagmiddag''. USAR. <nowiki>https://www.usar.nl/</nowiki>
 
Van Diggelen, F., & Enge, P. (2015). The World’s first GPS MOOC and Worldwide Laboratory using Smartphones. Proceedings of the 28th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2015), 361–369
 
VectorNav. (2023). Lidar - A measurement technique that uses light emitted from a sensor to measure the range to a target object. <nowiki>https://www.vectornav.com/applications/lidar-mapping#:~:text=LiDAR%20sensors%20are%20able%20to,sensing%20tool%20for%20mobile%20mapping</nowiki>
 
Wang, B. (2017, August). ''New inexpensive centimeter-accurate GPS system could transform mainstream applications | NextBigFuture.com''. NextBigFuture.com. <nowiki>https://www.nextbigfuture.com/2015/05/new-inexpensive-centimeter-accurate-gps.html</nowiki>
 
WellHealthWorks (2022, December 11). ''Thermal Imaging Camera Price''. <nowiki>https://wellhealthworks.com/thermal-imaging-camera-price-and-everything-you-need-to-know/#:~:text=Battery%20life%20of%2010%20hours</nowiki>
 
''What is GPS and how does it work?'' (n.d.). <nowiki>https://novatel.com/support/knowledge-and-learning/what-is-gps-gnss</nowiki>
 
''What is lidar?'' (n.d.). <nowiki>https://oceanservice.noaa.gov/facts/lidar.html</nowiki>
 
WinnerNot_aloser. (2023). ''Technology in SAR''. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/
 
Zhao, L., Sun, G., Li, W., & Zhang, H. (2016). The design of telescopic universal joint for earthquake rescue robot. 2016 Asia-Pacific Conference on Intelligent Robot Systems (ACIRS). <nowiki>https://doi.org/10.1109/acirs.2016.7556189</nowiki>


Zepeda, J. A. R. (2022). The 1-minute vine robot. Vine Robots. https://www.vinerobots.org/build-one/the-simple-vine-robot/
 
Zook_Jo. (2023). ''Technology in SAR''. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/
 
Zotomayor, C., (2021). This Vine Robot Is an Unstoppable Force With Tons of Applications. https://www.solidsmack.com/design/vine-robot/
 
==Appendix==


===Project planning and deliverables===
{| class="wikitable"
!Week
|}


====Who is doing what?====
{| class="wikitable"
!Names
|}


===Weekly breakdowns===
{| class="wikitable"
!Name
|-
|Richard Farla
|5h
|Course introduction (1h), Brainstorm session (1h), meeting (1h), literature research (1h), milestones (1h)
|-
|Yash Israni
|-
|Tessa de Jong
|5h
|Course introduction (1h), Brainstorm session (1h), meeting (1h), problem statement (1h), literature research (1h)
|-
|Kaj Scholer
|5h
|Course introduction (1h), Brainstorm session (1h), meeting (1h), milestones (1h), literature research (1h)
|-
|Pepijn Tennebroek
|5h
|Course introduction (1h), Brainstorm session (1h), meeting (1h), problem statement (1h), literature research (1h)
|}
{| class="wikitable"
|-
|Clinton Emok
|10h
|Meeting 1 (3h), Meeting 2 (3h), Executing test plan (4h)
|-
|Richard Farla
|11.5h
|Meeting 1 (3h), Simulation specification/design choices (1h), Simulation: vision (1h), Meeting 2 (3h), Simulation: testing (0.5h), Presentation (3h)
|-
|Yash Israni
|6h
|Meeting 1 (3h), Meeting 2 (3h)
|-
|Tessa de Jong
|9h
|Meeting 1 (3h), Presentation (2h), Wiki (1h), Meeting 2 (3h)
|-
|Kaj Scholer
|12h
|Meeting 1 (3h), Presentation Preparation (4h), Meeting 2 (3h), Sensors (2h)
|-
|Pepijn Tennebroek
|8h
|Meeting 1 (3h), Presentation (2h), Meeting 2 (3h)
|}
{| class="wikitable"
!Name
!Total
!Breakdown week 8
|-
|Clinton Emok
|6h
|Presentation (2h), Meeting 1 (2h), Test conclusion (1h), Miscellaneous (1h)
|-
|Richard Farla
|6h
|Presentation (2h), Meeting 1 (2h), Simulation (2h)
|-
|Yash Israni
|4h
|Presentation (2h), Meeting 1 (2h)
|-
|Tessa de Jong
|8.5h
|Presentation (2h), Meeting 1 (2h), Conclusion and future work (2.5h), Introduction and Users (2h)
|-
|Kaj Scholer
|7h
|Presentation (2h), Meeting 1 (2h), Vine Robot + Localization (3h)
|-
|Pepijn Tennebroek
|9.5h
|Presentation (2h), Meeting 1 (2h), Conclusion and future work (2.5h), State-of-the-art (2h), Sensors (1h)
|}


===User Study 1===
 
Agarwal, T. (2022, May 23). ''Different Types of Motion Sensors And How They Work''. ElProCus - Electronic Projects for Engineering Students. <nowiki>https://www.elprocus.com/working-of-different-types-of-motion-sensors/</nowiki>
 
AJLabs. (2023). Infographic: How big were the earthquakes in Turkey, Syria? Earthquakes News | Al Jazeera. https://www.aljazeera.com/news/2023/2/8/infographic-how-big-were-the-earthquakes-in-turkey-syria
 
Al-Naji, A., Perera, A. G., Mohammed, S. L., & Chahl, J. (2019). Life Signs Detector Using a Drone in Disaster Zones. Remote Sensing, 11(20), 2441. <nowiki>https://doi.org/10.3390/rs11202441</nowiki>
 
Ambe, Y., Yamamoto, T., Kojima, S., Takane, E., Tadakuma, K., Konyo, M., & Tadokoro, S. (2016). Use of active scope camera in the Kumamoto Earthquake to investigate collapsed houses. International Symposium on Safety, Security, and Rescue Robotics. <nowiki>https://doi.org/10.1109/ssrr.2016.7784272</nowiki>
 
Anthes, Gary. Robots Gear Up for Disaster Response. Communications of the ACM (2010): 15, 16. Web. 10 Oct. 2012
 
Auf der Maur, P., Djambazi, B., Haberthür, Y., Hörmann, P., Kübler, A., Lustenberger, M., Sigrist, S., Vigen, O., Förster, J., Achermann, F., Hampp, E., Katzschmann, R. K., & Siegwart, R. (2021). RoBoa: Construction and Evaluation of a Steerable Vine Robot for Search and Rescue Applications. 2021 IEEE 4th International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 2021, pp. 15-20, doi: 10.1109/RoboSoft51838.2021.9479192.
 
Bangalkar, Y. V., & Kharad, S. M. (2015). Review Paper on Search and Rescue Robot for Victims of Earthquake and Natural Calamities. International Journal on Recent and Innovation Trends in Computing and Communication, 3(4), 2037-2040.
 
Blines. (2023). ''Project USAR''. Reddit. <nowiki>https://www.reddit.com/r/Urbansearchandrescue/comments/11lvoms/project_usar/</nowiki>
 
Blumenschein, L. H., Coad M. M., Haggerty D. A., Okamura A. M., & Hawkes E. W. (2020). Design, Modeling, Control, and Application of Everting Vine Robots. https://doi.org/10.3389/frobt.2020.548266
 
Boston Dynamics, Inc. (2019). Robotically negotiating stairs (Patent Nr. 11,548,151). Justia. <nowiki>https://patents.justia.com/patent/11548151</nowiki>
 
Coad, M. M., Blumenschein, L. H., Cutler, S., Zepeda, J. A. R., Naclerio, N. D., El-Hussieny, H., ... & Okamura, A. M. (2019). Vine Robots. IEEE Robotics & Automation Magazine, 27(3), 120-132.
 
Coad, M. M., Blumenschein, L. H., Cutler, S., Zepeda, J. A. R., Naclerio, N. D., El-Hussieny, H., Mehmood, U., Ryu, J., Hawkes, E. W., & Okamura, A. M. (2020). Vine Robots: Design, Teleoperation, and Deployment for Navigation and Exploration. https://arxiv.org/pdf/1903.00069.pdf
 
Da Hu, Shuai Li, Junjie Chen, Vineet R. Kamat,Detecting, locating, and characterizing voids in disaster rubble for search and rescue, Advanced Engineering Informatics, Volume 42,2019,100974,ISSN 1474-0346, <nowiki>https://doi.org/10.1016/j.aei.2019.100974</nowiki>
 
De Cubber, G., Doroftei, D., Serrano, D., Chintamani, K., Sabino, R., & Ourevitch, S. (2013, October). The EU-ICARUS project: developing assistive robotic tools for search and rescue operations. In 2013 IEEE international symposium on safety, security, and rescue robotics (SSRR) (pp. 1-4). IEEE.
 
Delmerico, J., Mintchev, S., Giusti, A., Gromov, B., Melo, K., Horvat, T., Cadena, C., Hutter, M., Ijspeert, A., Floreano, D., Gambardella, L. M., Siegwart, R., & Scaramuzza, D. (2019). The current state and future outlook of rescue robotics. Journal of Field Robotics, 36(7), 1171–1191. <nowiki>https://doi.org/10.1002/rob.21887</nowiki>
 
Engineering, O. (2022a, October 14). Thermal Imaging Camera. <nowiki>https://www.omega.com/en-us/</nowiki>. <nowiki>https://www.omega.com/en-us/resources/thermal-imagers</nowiki>
 
F. Colas, S. Mahesh, F. Pomerleau, M. Liu and R. Siegwart, "3D path planning and execution for search and rescue ground robots," 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 2013, pp. 722-727, doi: 10.1109/IROS.2013.6696431. This paper presents a pathplanning system for a static 3D environment, with the use oflazy tensor voting.
 
Foxtrot841. (2023).''Technology in SAR''. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/
 
Hampson, M. (2022). Detecting Earthquake Victims Through Walls. IEEE Spectrum. <nowiki>https://spectrum.ieee.org/dopppler-radar-detects-breath</nowiki>
 
Hatazaki, K., Konyo, M., Isaki, K., Tadokoro, S., and Takemura, F. (2007)  Active scope camera for urban search and rescue IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 2007, pp. 2596-2602, doi: 10.1109/IROS.2007.4399386
 
Huamanchahua, D., Aubert, K., Rivas, M., Guerrero, E. L., Kodaka, L., & Guevara, D. C. (2022). Land-Mobile Robots for Rescue and Search: A Technological and Systematic Review. 2022 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS). <nowiki>https://doi.org/10.1109/iemtronics55184.2022.9795829</nowiki>
 
Instruments, M. (2021, May 16). What are Gas Detectors and How are They Used in Various Industries? MRU Instruments - Emissions Analyzers. <nowiki>https://mru-instruments.com/what-are-gas-detectors-and-how-are-they-used-in-various-industries/</nowiki>
 
Kamezaki, M., Ishii, H., Ishida, T., Seki, M., Ichiryu, K., Kobayashi, Y., Hashimoto, K., Sugano, S., Takanishi, A., Fujie, M. G., Hashimoto, S., & Yamakawa, H. (2016). Design of four-arm four-crawler disaster response robot OCTOPUS. International Conference on Robotics and Automation. <nowiki>https://doi.org/10.1109/icra.2016.7487447</nowiki>
 
Kawatsuma, S., Fukushima, M., & Okada, T. (2013). Emergency response by robots to Fukushima-Daiichi accident: summary and lessons learned. Journal of Field Robotics, 30(1), 44-63. doi: 10.1002/rob.21416
 
Kruijff, G. M., Kruijff-Korbayová, I., Keshavdas, S., Larochelle, B., Janíček, M., Colas, F., Liu, M., Pomerleau, F., Siegwart, R., N., Looije, R., Smets, N. J. J. M., Mioch, T., Van Diggelen, J., Pirri, F., Gianni, M., Ferri, F., Menna, M., Worst, R., . . . Hlaváč, V. (2014). Designing, developing, and deploying systems to support human–robot teams in disaster response. Advanced Robotics, 28(23), 1547–1570. <nowiki>https://doi.org/10.1080/01691864.2014.985335</nowiki>
 
Lee, S., Har, D., & Kum, D. (2016). Drone-Assisted Disaster Management: Finding Victims via Infrared Camera and Lidar Sensor Fusion. 2016 3rd Asia-Pacific World Congress on Computer Science and Engineering (APWC on CSE). <nowiki>https://doi.org/10.1109/apwc-on-cse.2016.025</nowiki>
 
Li, F., Hou, S., Bu, C., & Qu, B. (2022). Rescue Robots for the Urban Earthquake Environment. Disaster Medicine and Public Health Preparedness, 17. <nowiki>https://doi.org/10.1017/dmp.2022.98</nowiki>
 
Lindqvist, B., Karlsson, S., Koval, A., Tevetzidis, I., Haluška, J., Kanellakis, C., Agha-mohammadi, A. A., & Nikolakopoulos, G. (2022). Multimodality robotic systems: Integrated combined legged-aerial mobility for subterranean search-and-rescue. Robotics and Autonomous Systems, 154, 104134. <nowiki>https://doi.org/10.1016/j.robot.2022.104134</nowiki>
 
Liu, Y., Nejat, G. Robotic Urban Search and Rescue: A Survey from the Control Perspective. J Intell Robot Syst 72, 147–165 (2013). <nowiki>https://doi.org/10.1007/s10846-013-9822-x</nowiki>
 
ManOfDiscovery. (2023). ''Technology in SAR''. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/
 
Matsuno, F., Sato, N., Kon, K., Igarashi, H., Kimura, T., Murphy, R. (2014). Utilization of Robot Systems in Disaster Sites of the Great Eastern Japan Earthquake. In: Yoshida, K., Tadokoro, S. (eds) Field and Service Robotics. Springer Tracts in Advanced Robotics, vol 92. Springer, Berlin, Heidelberg. <nowiki>https://doi.org/10.1007/978-3-642-40686-7_</nowiki>
 
Murphy, R. R. (2017). Disaster Robotics. Amsterdam University Press
 
Nosirov, K. K., Shakhobiddinov, A. S., Arabboev, M., Begmatov, S., & Togaev, O. T. (2020). "Specially Designed Multi-Functional Search And Rescue Robot," Bulletin of TUIT: Management and Communication Technologies: Vol. 2, Article 1. <nowiki>https://doi.org/10.51348/tuitmct211</nowiki>
 
Osumi, H. (2014). Application of robot technologies to the disaster sites. Report of JSME Research Committee on the Great East Japan Earthquake Disaster, 58-74
 
Park, S., Oh, Y., & Hong, D. (2017). Disaster response and recovery from the perspective of robotics. International Journal of Precision Engineering and Manufacturing, 18(10), 1475–1482. <nowiki>https://doi.org/10.1007/s12541-017-0175-4</nowiki>
 
Raibert, M. H. (2000). Legged Robots That Balance. MIT Press
 
Sanfilippo F, Azpiazu J, Marafioti G, Transeth AA, Stavdahl Ø, Liljebäck P. Perception-Driven Obstacle-Aided Locomotion for Snake Robots: The State of the Art, Challenges and Possibilities †. Applied Sciences. 2017; 7(4):336. <nowiki>https://doi.org/10.3390/app7040336</nowiki>
 
Tadokoro, S. (Ed.). (2009). Rescue robotics: DDT project on robots and systems for urban search and rescue. Springer Science & Business Media.
 
Tai, Y., & Yu, T.-T. (2022). Using Smartphones to Locate Trapped Victims in Disasters. Sensors, 22, 7502. <nowiki>https://doi.org/10.3390/s22197502</nowiki>
 
Tenreiro Machado, J. A., & Silva, M. F. (2006). An Overview of Legged Robots
 
Uddin, Z., & Islam, M. (2016). Search and rescue system for alive human detection by semi-autonomous mobile rescue robot. 2016 International Conference on Innovations in Science, Engineering and Technology (ICISET). <nowiki>https://doi.org/10.1109/iciset.2016.7856489</nowiki>
 
Urban Search and Rescue Team. (2023). ''Update zaterdagmiddag''. USAR. <nowiki>https://www.usar.nl/</nowiki>
 
Vine robots. (n.d.). Vine Robot Base Bill of Materials [Dataset]. <nowiki>https://docs.google.com/document/d/116dmSj30YTTIdREIyxc65BnzAhu0tS0XKFy5Wbvn4V4/edit</nowiki>

Vine robots. (n.d.). Vine Robot Body Bill of Materials [Dataset]. <nowiki>https://docs.google.com/document/d/1OhQgyFUZ33Q_gsACnn7N8mZ0xsyPPV8LrQdCgpysmGw/edit</nowiki>

Vine robots. (n.d.). Vine Robot Base Building Instructions [Presentation slides; Google Slides]. <nowiki>https://docs.google.com/presentation/d/1EgZlK4-h8C8dYxRFinapzcK7766c7cjgJA-kvjLgQuY/edit#slide=id.p</nowiki>
 
''What is a chemical sensor?'' (2019, September 24). Fierce Electronics. <nowiki>https://www.fierceelectronics.com/electronics/what-a-chemical-sensor</nowiki>
 
''What is GPS and how does it work?'' (n.d.). <nowiki>https://novatel.com/support/knowledge-and-learning/what-is-gps-gnss</nowiki>
 
''What is lidar?'' (n.d.). <nowiki>https://oceanservice.noaa.gov/facts/lidar.html</nowiki>
 
WinnerNot_aloser. (2023). ''Technology in SAR''. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/
 
Zhao, L., Sun, G., Li, W., & Zhang, H. (2016). The design of telescopic universal joint for earthquake rescue robot. 2016 Asia-Pacific Conference on Intelligent Robot Systems (ACIRS). <nowiki>https://doi.org/10.1109/acirs.2016.7556189</nowiki>
 
Zepeda, J. A. R. (2022). The 1-minute vine robot. Vine Robots. https://www.vinerobots.org/build-one/the-simple-vine-robot/
 
Zook_Jo. (2023). ''Technology in SAR''. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/
 
Zotomayor, C. (2021). This Vine Robot Is an Unstoppable Force With Tons of Applications. https://www.solidsmack.com/design/vine-robot/
 
==Appendix==
 
===<u>User Study 1</u>===
'''Query:'''




''The costs add up.''" (Foxtrot841, 2023).
===<u>User Study 2</u>===
'''Query:'''


''As far as technologies we dont have thatbrobotics may solve..... remember in Prometheus. The Pups. They scanned the areas and were able to create a 3D rendering of the tunnels. Something like that would be amazing. It would help us look for void spaces or recreate the building based on what we see.''


''Another limitation we have with the cameras is the boom and the boom length. We can only go in about 12' and thats if we get straight back. There are fiber optic cameras but they dont have a way to control them at the head. So a tethered walking robot with legs and a camera would allow us to go much deeper without having to get the rescue guys out to start breaking concrete''." (Blines, 2023).
 
===Sensor Results===
'''Range:'''
 
*Thermal Imaging Camera: 15km (Thermal Zoom Cameras, n.d.)
*Sound Sensor: 2.5m (Assembly, n.d.)
*Gas Detector: 190m (Baker Engineering and Risk Consultants, n.d.)
*LIDAR: 200m (Texas Instruments, n.d.)
*GPS: 20000km (GPS.gov: Space Segment, n.d.)
 
'''Resolution:'''


*Thermal Imaging Camera: This camera has a resolution of 640x480 (Camera Resolution and Range, n.d.)
*Sound Sensor: The resolution of a sound sensor is 0.1dB (PCE Instruments UK: Test Instruments, 2023)
*Gas Detector: The detector resolution is around 0.01 ppm (parts per million) (Analytika, n.d.)
*LIDAR: This sensor has a resolution of  0.2° in vertical and 2° in horizontal (Lyu et al., 2019)
*GPS: The GPS has a resolution of 86cm (Wang, 2017)
 
'''Accuracy:'''
 
*Thermal Imaging Camera: The camera has a 2ºC or 2% margin of error as stated by Teledyne FLIR (2016)
*Sound Sensor: The sound sensor has an accuracy of ± 3 dB (J., n.d.)
*Gas Detector: The detector has an accuracy of around 50 ppm (parts per million) which is 0.005% (CO2 Meter, 2022)
*LIDAR: This sensor has an accuracy of 10mm in range and for the mapping itself it has an accuracy of 1 cm for the x and y axis and 2 cm for the z axis (VectorNav, 2023)
*GPS: The GPS has an accuracy of 4.9 meters (Van Diggelen & Enge, 2015)
 
'''Cost:'''
 
*Thermal Imaging Camera: $200 (WellHealthWorks, 2022)
*Gas Detector: $300 (Seitron, n.d.)
*Sound Sensor: $35 (Thomann, n.d.)
*LIDAR: $144 (Shop - Wiseome Mini LiDAR Supplier, n.d.)
*GPS: $100 (Hensen, 2023)
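Reading the four lists together is easier with a single comparative score per sensor. The sketch below combines only the range and cost figures into one min-max-normalised score; the equal weighting is purely an illustrative assumption of ours, not part of any cited source, and the GPS "range" is the distance to the satellite constellation rather than a victim-detection range, which is why it dominates the raw numbers.

```python
# Combine the range and cost figures listed above into one score per sensor.
# The equal weighting and min-max normalisation are illustrative assumptions.

sensors = {
    # name: (range in metres, cost in USD), as listed in this appendix
    "thermal camera": (15_000, 200),
    "sound sensor": (2.5, 35),
    "gas detector": (190, 300),
    "lidar": (200, 144),
    "gps": (20_000_000, 100),  # distance to satellites, not a detection range
}

max_range = max(r for r, _ in sensors.values())
max_cost = max(c for _, c in sensors.values())

def score(range_m, cost_usd, w_range=0.5, w_cost=0.5):
    """Higher is better: reward normalised range, penalise normalised cost."""
    return w_range * (range_m / max_range) - w_cost * (cost_usd / max_cost)

ranking = sorted(sensors, key=lambda n: score(*sensors[n]), reverse=True)
print(ranking)
```

Changing the weights, or adding resolution and accuracy terms, changes the ranking; the point is the mechanism for trading the figures off against each other, not any particular outcome.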
 
===Project documents===
[https://docs.google.com/document/d/18f_DwGenpTPTsCKxf-CoWOPeGHHxH9a4Tki6O8UTvtk/edit?usp=sharing Test results]
 
===Simulation code===
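As a reading aid for the NetLogo listing below, the robot's target-selection policy (implemented in the <code>get-target-patch</code> reporter) can be summarised in plain Python: prefer a survivor (green) patch in view, otherwise chase the furthest visible stimulus, otherwise take a random admissible step. The dictionaries here are illustrative stand-ins for NetLogo patches and stimuli, not real NetLogo objects.

```python
import random

def distance(a, b):
    # Euclidean distance between two coordinate tuples
    return sum((ax - bx) ** 2 for ax, bx in zip(a, b)) ** 0.5

def get_target_patch(robot_pos, patches_in_view, possible_targets, stimuli_in_view):
    """Return the patch the robot should step onto, or None when it is stuck."""
    if not possible_targets:
        return None  # no admissible moves: the robot got stuck
    # 1) If a survivor patch (green) is visible, step towards it.
    greens = [p for p in patches_in_view if p["color"] == "green"]
    if greens:
        goal = random.choice(greens)
        return min(possible_targets, key=lambda p: distance(p["pos"], goal["pos"]))
    # 2) Otherwise follow the furthest visible stimulus particle.
    if stimuli_in_view:
        furthest = max(stimuli_in_view, key=lambda s: distance(s["pos"], robot_pos))
        return min(possible_targets, key=lambda p: distance(p["pos"], furthest["pos"]))
    # 3) No information: take a random admissible step.
    return random.choice(possible_targets)
```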
breed [stimuli stimulus]
breed [robots robot]
stimuli-own [lifespan]
globals [should-stop?]
to setup
  clear-all
  cluster-debris
  setup-robots
  setup-stimuli-patches
  set should-stop? false
  reset-ticks
end
to cluster-debris
  ask n-of cluster-count patches [
    set pcolor gray
    let frontier [self] of neighbors6 with [pcolor != gray]
    let cluster-size cluster-min-size + random (cluster-max-size - cluster-min-size + 1)
    let current-size 1
    loop [
      if current-size >= cluster-size or empty? frontier [stop]
      ask one-of frontier [
        set pcolor gray
        set frontier remove self frontier
        set frontier sentence frontier [self] of neighbors6 with [pcolor != gray]
      ]
      set current-size current-size + 1
    ]
  ]
end
to setup-robots
  create-robots 1 [
    set color yellow
    setxyz x-start y-start z-start
    set pitch 270
    set heading 0
    pen-down
  ]
end
to setup-stimuli-patches
  ; create some green patches where the vine robot receives "good" stimuli
  ask n-of survivors patches [
    set pcolor green
  ]
  ; create some red patches where the vine robot receives "negative" stimuli
  ask n-of random-stimuli patches [
    set pcolor red
  ]
end
to go
  if should-stop? [stop]
  spawn-turtle
  move-turtles
  tick
end
to spawn-turtle
;  spawn turtles randomly
  if ticks mod spawn-interval = 0 [
    let spawn-patches patches with [pcolor = red]
    if any? spawn-patches [
      let random-patch one-of spawn-patches
      create-stimuli 100 [
        set lifespan random 50 ; passable parameter for max range of lifespan
        set color yellow
        setxyz [pxcor] of random-patch [pycor] of random-patch [pzcor] of random-patch
        set pitch random 360
      ]
    ]
    ; Spawn stimuli from survivor (green) patches; these are the stimuli
    ; the robot follows towards a survivor
    let goal-patches patches with [pcolor = green]
    if any? goal-patches [
      let random-patch one-of goal-patches
      create-stimuli 10 [
        set lifespan random 50 ; passable parameter for max range of lifespan
        set color red
        setxyz [pxcor] of random-patch [pycor] of random-patch [pzcor] of random-patch
        set pitch random 360
      ]
    ]
  ]
end
to move-turtles
    ask robots [
    let target get-target-patch
    ;; stop if there are no possible moves
    if target = nobody [
      set should-stop? true
      user-message "The robot got stuck!"
      stop
    ]
    face target
    move-to target
    if [pcolor = green] of patch-here [
      set should-stop? true
      user-message (word "A survivor was found at " patch-here)
    ]
  ]
  ; move stimuli
  ask stimuli [
    let pre-move patch-here
    fd 2
    if distance pre-move != distance-nowrap pre-move [die] ;stimulus wrapped
    right random 20
    left random 20
    tilt-up random 20
    tilt-down random 20
    set lifespan lifespan - random 20 ; passable parameter for lifespan?
    death ;stimuli disappear after random amount of ticks
  ]
end
to death  ; stimulus procedure
  ; when lifespan dips below zero, die
  if lifespan < 0 [ die ]
end
;; Returns the patch towards which the robot should move, or nobody if there are no possible moves
to-report get-target-patch
  let view-radius 10
  let patches-in-view patches in-cone view-radius 179 with [distance-nowrap myself <= view-radius]
  let possible-targets patches in-cone 1.9 110 with [ ; patches towards which the robot can move
    pcolor != gray                ; debris clusters
    and distance-nowrap myself > 0 ; current patch
    and distance-nowrap myself < 2 ; wrapping
  ]
  if any? patches-in-view with [pcolor = green] [
    ;; Try to move towards a green patch in view
    let target-in-view one-of patches-in-view with [pcolor = green]
    report one-of possible-targets with-min [distance target-in-view]
  ]
  ask patches-in-view with [pcolor = red] [set pcolor magenta]
  let stimuli-in-view stimuli in-cone view-radius 179 with [distance-nowrap myself <= view-radius]
  if any? stimuli-in-view [
    ;; Try to move towards the furthest stimulus in view
    let furthest-stimulus one-of stimuli-in-view with-max [distance myself]
    report one-of possible-targets with-min [distance furthest-stimulus]
  ]
  ;; TODO: informed choice in absence of stimuli
  report one-of possible-targets
end
@#$#@#$#@
GRAPHICS-WINDOW
0
0
620
621
-1
-1
12.0
1
10
1
1
1
0
1
1
1
-25
25
-25
25
-25
25
1
1
1
ticks
30.0
BUTTON
10
11
87
44
NIL
setup
NIL
1
T
OBSERVER
NIL
NIL
NIL
NIL
1
TEXTBOX
114
17
285
35
Debris cluster settings
16
0.0
1
SLIDER
114
51
286
84
cluster-count
cluster-count
0
500
250.0
1
1
NIL
HORIZONTAL
SLIDER
114
102
286
135
cluster-min-size
cluster-min-size
1
cluster-max-size
25.0
1
1
NIL
HORIZONTAL
SLIDER
114
151
286
184
cluster-max-size
cluster-max-size
cluster-min-size
100
50.0
1
1
NIL
HORIZONTAL
SLIDER
115
199
287
232
spawn-interval
spawn-interval
0
25
1.0
1
1
NIL
HORIZONTAL
SLIDER
116
241
289
274
survivors
survivors
0
10
1.0
1
1
NIL
HORIZONTAL
SLIDER
114
287
287
320
random-stimuli
random-stimuli
0
100
50.0
1
1
NIL
HORIZONTAL
BUTTON
12
60
88
94
NIL
go
T
1
T
OBSERVER
NIL
NIL
NIL
NIL
1
TEXTBOX
363
21
513
41
Robot start settings
16
0.0
1
INPUTBOX
336
56
525
116
x-start
0.0
1
0
Number
INPUTBOX
334
137
524
197
y-start
0.0
1
0
Number
INPUTBOX
333
208
523
268
z-start
25.0
1
0
Number
BUTTON
568
114
711
147
Follow Vine Robot
follow robot 0
NIL
1
T
OBSERVER
NIL
NIL
NIL
NIL
1
TEXTBOX
569
18
719
41
Perspective
16
0.0
1
BUTTON
568
58
711
91
Watch Vine Robot
watch robot 0
NIL
1
T
OBSERVER
NIL
NIL
NIL
NIL
1
BUTTON
569
163
712
196
Ride Vine Robot
ride robot 0
NIL
1
T
OBSERVER
NIL
NIL
NIL
NIL
1
BUTTON
570
214
716
247
Reset Perspective
reset-perspective
NIL
1
T
OBSERVER
NIL
NIL
NIL
NIL
1
@#$#@#$#@
@#$#@#$#@
default
true
0
Polygon -7500403 true true 150 5 40 250 150 205 260 250
airplane
true
0
Polygon -7500403 true true 150 0 135 15 120 60 120 105 15 165 15 195 120 180 135 240 105 270 120 285 150 270 180 285 210 270 165 240 180 180 285 195 285 165 180 105 180 60 165 15
arrow
true
0
Polygon -7500403 true true 150 0 0 150 105 150 105 293 195 293 195 150 300 150
box
false
0
Polygon -7500403 true true 150 285 285 225 285 75 150 135
Polygon -7500403 true true 150 135 15 75 150 15 285 75
Polygon -7500403 true true 15 75 15 225 150 285 150 135
Line -16777216 false 150 285 150 135
Line -16777216 false 150 135 15 75
Line -16777216 false 150 135 285 75
bug
true
0
Circle -7500403 true true 96 182 108
Circle -7500403 true true 110 127 80
Circle -7500403 true true 110 75 80
Line -7500403 true 150 100 80 30
Line -7500403 true 150 100 220 30
butterfly
true
0
Polygon -7500403 true true 150 165 209 199 225 225 225 255 195 270 165 255 150 240
Polygon -7500403 true true 150 165 89 198 75 225 75 255 105 270 135 255 150 240
Polygon -7500403 true true 139 148 100 105 55 90 25 90 10 105 10 135 25 180 40 195 85 194 139 163
Polygon -7500403 true true 162 150 200 105 245 90 275 90 290 105 290 135 275 180 260 195 215 195 162 165
Polygon -16777216 true false 150 255 135 225 120 150 135 120 150 105 165 120 180 150 165 225
Circle -16777216 true false 135 90 30
Line -16777216 false 150 105 195 60
Line -16777216 false 150 105 105 60
car
false
0
Polygon -7500403 true true 300 180 279 164 261 144 240 135 226 132 213 106 203 84 185 63 159 50 135 50 75 60 0 150 0 165 0 225 300 225 300 180
Circle -16777216 true false 180 180 90
Circle -16777216 true false 30 180 90
Polygon -16777216 true false 162 80 132 78 134 135 209 135 194 105 189 96 180 89
Circle -7500403 true true 47 195 58
Circle -7500403 true true 195 195 58
circle
false
0
Circle -7500403 true true 0 0 300
circle 2
false
0
Circle -7500403 true true 0 0 300
Circle -16777216 true false 30 30 240
cow
false
0
Polygon -7500403 true true 200 193 197 249 179 249 177 196 166 187 140 189 93 191 78 179 72 211 49 209 48 181 37 149 25 120 25 89 45 72 103 84 179 75 198 76 252 64 272 81 293 103 285 121 255 121 242 118 224 167
Polygon -7500403 true true 73 210 86 251 62 249 48 208
Polygon -7500403 true true 25 114 16 195 9 204 23 213 25 200 39 123
cylinder
false
0
Circle -7500403 true true 0 0 300
dot
false
0
Circle -7500403 true true 90 90 120
face happy
false
0
Circle -7500403 true true 8 8 285
Circle -16777216 true false 60 75 60
Circle -16777216 true false 180 75 60
Polygon -16777216 true false 150 255 90 239 62 213 47 191 67 179 90 203 109 218 150 225 192 218 210 203 227 181 251 194 236 217 212 240
face neutral
false
0
Circle -7500403 true true 8 7 285
Circle -16777216 true false 60 75 60
Circle -16777216 true false 180 75 60
Rectangle -16777216 true false 60 195 240 225
face sad
false
0
Circle -7500403 true true 8 8 285
Circle -16777216 true false 60 75 60
Circle -16777216 true false 180 75 60
Polygon -16777216 true false 150 168 90 184 62 210 47 232 67 244 90 220 109 205 150 198 192 205 210 220 227 242 251 229 236 206 212 183
fish
false
0
Polygon -1 true false 44 131 21 87 15 86 0 120 15 150 0 180 13 214 20 212 45 166
Polygon -1 true false 135 195 119 235 95 218 76 210 46 204 60 165
Polygon -1 true false 75 45 83 77 71 103 86 114 166 78 135 60
Polygon -7500403 true true 30 136 151 77 226 81 280 119 292 146 292 160 287 170 270 195 195 210 151 212 30 166
Circle -16777216 true false 215 106 30
flag
false
0
Rectangle -7500403 true true 60 15 75 300
Polygon -7500403 true true 90 150 270 90 90 30
Line -7500403 true 75 135 90 135
Line -7500403 true 75 45 90 45
flower
false
0
Polygon -10899396 true false 135 120 165 165 180 210 180 240 150 300 165 300 195 240 195 195 165 135
Circle -7500403 true true 85 132 38
Circle -7500403 true true 130 147 38
Circle -7500403 true true 192 85 38
Circle -7500403 true true 85 40 38
Circle -7500403 true true 177 40 38
Circle -7500403 true true 177 132 38
Circle -7500403 true true 70 85 38
Circle -7500403 true true 130 25 38
Circle -7500403 true true 96 51 108
Circle -16777216 true false 113 68 74
Polygon -10899396 true false 189 233 219 188 249 173 279 188 234 218
Polygon -10899396 true false 180 255 150 210 105 210 75 240 135 240
house
false
0
Rectangle -7500403 true true 45 120 255 285
Rectangle -16777216 true false 120 210 180 285
Polygon -7500403 true true 15 120 150 15 285 120
Line -16777216 false 30 120 270 120
leaf
false
0
Polygon -7500403 true true 150 210 135 195 120 210 60 210 30 195 60 180 60 165 15 135 30 120 15 105 40 104 45 90 60 90 90 105 105 120 120 120 105 60 120 60 135 30 150 15 165 30 180 60 195 60 180 120 195 120 210 105 240 90 255 90 263 104 285 105 270 120 285 135 240 165 240 180 270 195 240 210 180 210 165 195
Polygon -7500403 true true 135 195 135 240 120 255 105 255 105 285 135 285 165 240 165 195
line
true
0
Line -7500403 true 150 0 150 300
line half
true
0
Line -7500403 true 150 0 150 150
link
true
0
Line -7500403 true 150 0 150 300
link direction
true
0
Line -7500403 true 150 150 30 225
Line -7500403 true 150 150 270 225
pentagon
false
0
Polygon -7500403 true true 150 15 15 120 60 285 240 285 285 120
person
false
0
Circle -7500403 true true 110 5 80
Polygon -7500403 true true 105 90 120 195 90 285 105 300 135 300 150 225 165 300 195 300 210 285 180 195 195 90
Rectangle -7500403 true true 127 79 172 94
Polygon -7500403 true true 195 90 240 150 225 180 165 105
Polygon -7500403 true true 105 90 60 150 75 180 135 105
plant
false
0
Rectangle -7500403 true true 135 90 165 300
Polygon -7500403 true true 135 255 90 210 45 195 75 255 135 285
Polygon -7500403 true true 165 255 210 210 255 195 225 255 165 285
Polygon -7500403 true true 135 180 90 135 45 120 75 180 135 210
Polygon -7500403 true true 165 180 165 210 225 180 255 120 210 135
Polygon -7500403 true true 135 105 90 60 45 45 75 105 135 135
Polygon -7500403 true true 165 105 165 135 225 105 255 45 210 60
Polygon -7500403 true true 135 90 120 45 150 15 180 45 165 90
square
false
0
Rectangle -7500403 true true 30 30 270 270
square 2
false
0
Rectangle -7500403 true true 30 30 270 270
Rectangle -16777216 true false 60 60 240 240
star
false
0
Polygon -7500403 true true 151 1 185 108 298 108 207 175 242 282 151 216 59 282 94 175 3 108 116 108
target
false
0
Circle -7500403 true true 0 0 300
Circle -16777216 true false 30 30 240
Circle -7500403 true true 60 60 180
Circle -16777216 true false 90 90 120
Circle -7500403 true true 120 120 60
tree
false
0
Circle -7500403 true true 118 3 94
Rectangle -6459832 true false 120 195 180 300
Circle -7500403 true true 65 21 108
Circle -7500403 true true 116 41 127
Circle -7500403 true true 45 90 120
Circle -7500403 true true 104 74 152
triangle
false
0
Polygon -7500403 true true 150 30 15 255 285 255
triangle 2
false
0
Polygon -7500403 true true 150 30 15 255 285 255
Polygon -16777216 true false 151 99 225 223 75 224
truck
false
0
Rectangle -7500403 true true 4 45 195 187
Polygon -7500403 true true 296 193 296 150 259 134 244 104 208 104 207 194
Rectangle -1 true false 195 60 195 105
Polygon -16777216 true false 238 112 252 141 219 141 218 112
Circle -16777216 true false 234 174 42
Rectangle -7500403 true true 181 185 214 194
Circle -16777216 true false 144 174 42
Circle -16777216 true false 24 174 42
Circle -7500403 false true 24 174 42
Circle -7500403 false true 144 174 42
Circle -7500403 false true 234 174 42
turtle
true
0
Polygon -10899396 true false 215 204 240 233 246 254 228 266 215 252 193 210
Polygon -10899396 true false 195 90 225 75 245 75 260 89 269 108 261 124 240 105 225 105 210 105
Polygon -10899396 true false 105 90 75 75 55 75 40 89 31 108 39 124 60 105 75 105 90 105
Polygon -10899396 true false 132 85 134 64 107 51 108 17 150 2 192 18 192 52 169 65 172 87
Polygon -10899396 true false 85 204 60 233 54 254 72 266 85 252 107 210
Polygon -7500403 true true 119 75 179 75 209 101 224 135 220 225 175 261 128 261 81 224 74 135 88 99
wheel
false
0
Circle -7500403 true true 3 3 294
Circle -16777216 true false 30 30 240
Line -7500403 true 150 285 150 15
Line -7500403 true 15 150 285 150
Circle -7500403 true true 120 120 60
Line -7500403 true 216 40 79 269
Line -7500403 true 40 84 269 221
Line -7500403 true 40 216 269 79
Line -7500403 true 84 40 221 269
x
false
0
Polygon -7500403 true true 270 75 225 30 30 225 75 270
Polygon -7500403 true true 30 75 75 30 270 225 225 270
@#$#@#$#@
NetLogo 3D 6.3.0
@#$#@#$#@
@#$#@#$#@
@#$#@#$#@
@#$#@#$#@
@#$#@#$#@
default
0.0
-0.2 0 0.0 1.0
0.0 1 1.0 0.0
0.2 0 0.0 1.0
link direction
true
0
Line -7500403 true 150 150 90 180
Line -7500403 true 150 150 210 180
@#$#@#$#@
0
@#$#@#$#@

Latest revision as of 08:38, 10 April 2023

==Group members==

{| class="wikitable"
! Name !! Student Number !! Study
|-
| Clinton Emok || 1415115 || BCS
|-
| Richard Farla || 1420380 || BCS
|-
| Yash Israni || 1415883 || BCS
|-
| Tessa de Jong || 1498312 || BPT
|-
| Kaj Scholer || 1567942 || BME
|-
| Pepijn Tennebroek || 1470221 || BPT
|}

==Abstract==

The use of robotics in disaster response operations has gained significant attention in recent years. In particular, robotics is being used to navigate and locate survivors through complex terrain. However, current rescue robots have multiple limitations which make it difficult for them to be used successfully. This paper proposes and argues for the use of low-cost vine robots as a promising technology for search and rescue missions in urban areas affected by earthquakes. Vine robots are small, flexible and lightweight robots that can navigate through tight spaces and confined areas, making them ideal for searching collapsed buildings for survivors. They are relatively cheaper than the current state of the art search and rescue robots. However, vine robots have not been implemented into real-life search and rescue missions due to various limitations. In this paper, the limitations are addressed, specifically with regards to the path finding and localization capabilities. This includes research into components that support the vine robot in its capabilities such as sensors, where a comparison is made to figure out what type of sensor would be best suited for extending the vine robots capability to be able to localize survivors. Furthermore, a simulation is made to investigate a localization algorithm that includes path finding capabilities, to conclude the usefulness of using such a robot during a search and rescue mission when looking through a collapsed building. The findings of this paper conclude that further development needs to take place for the vine robot technology to be used successfully in search and rescue missions.

==Introduction and research aims==

“Two large earthquakes struck the southeastern region of Turkey near the border with Syria on Monday, killing thousands and toppling residential buildings across the region.” (AJLabs, 2023) Both earthquakes measured above 7.5 on the Richter scale, displacing buildings from their foundations with people still inside. Some people survived a building's collapse, but were left trapped under the rubble.

Earthquakes are among the most devastating natural disasters that can occur in urban areas, leading to significant damage to infrastructure, loss of life and displacement of communities. In recent years, search and rescue operations have become increasingly important in the aftermath of earthquakes. These operations aim to locate victims trapped under collapsed buildings and provide them with the necessary medical care and assistance. However, these operations can be challenging due to the complexity and dangers associated with navigating through the rubble of damaged buildings. To address these challenges, technology has emerged as a promising aid for search and rescue operations in urban areas affected by earthquakes, with the potential to significantly improve the efficiency and effectiveness of such operations and to reduce the risk to human rescuers.

This paper provides an overview of the use of technology and robotics in the localization of victims of earthquakes that lead to infrastructure damage, specifically in urban areas. It will further go into the design and capabilities of robotics, highlighting potential advantages over traditional search and rescue methods. The paper will also discuss challenges associated with the use of technology and robotics, such as limited battery life, difficulties in controlling the robot in complex environments and the need for specialized training for operators.

The research aims addressed in this paper include:

# What are the potential advantages and challenges associated with the usage of low-cost robotics and technology in earthquake response efforts?
# How have robots been deployed in recent earthquake disasters and what has their impact on search and rescue operations been?
# What are the future prospects of low-cost robots in earthquake response efforts?
# What does a localization and path-planning algorithm for urban search and rescue look like?

==State-of-the-art literature==

===Existing rescue robots===

====The Current State and Future Outlook of Rescue Robotics====

There exist various state-of-the-art rescue robots with promising future outlooks. However, achieving full autonomy in real-world rescue scenarios is presently challenging to implement. In fact, there is a marked inclination towards semi-autonomous behaviors, rather than complete manual control (Delmerico et al., 2019).

====Robotically negotiating stairs====

The technique for traversing stairs involves obtaining image data depicting a robot navigating through a stair-filled environment using either one or two legs. This method is highly comparable to crossing terrain that is cluttered with debris and fallen objects (Boston Dynamics, Inc., 2019).

====Design of four-arm four-crawler disaster response robot OCTOPUS====

Four-arm disaster response robot OCTOPUS (Kamezaki et al., 2016).

The OCTOPUS robot, presented in this paper, boasts four arms and four crawlers, providing exceptional mobility and flexibility. Its arms are engineered with multiple degrees of freedom, enabling the robot to execute intricate tasks such as lifting heavy objects and opening doors. Furthermore, the crawlers are designed to offer stability and traction, which allow the robot to move seamlessly on irregular and slippery surfaces (Kamezaki et al., 2016).

===Technology in rescue robots===

====Specially Designed Multi-Functional Search And Rescue Robot====

In this paper, a sensor-based multi-functional search and rescue robot system for use in emergency situations is designed. The authors provide insight into the various components included in this robot system: a servo motor, USB camera, DC motor, motor driver module, stepper motor, Darlington transistor arrays, an ultrasonic sensor, a Raspberry Pi 3 Model B+ and an Arduino Mega. This robot system can search for humans in ruined areas and send the collected data to a web server, which can then stream video in real time (Nosirov et al., 2020).

====Life Signs Detector Using a Drone in Disaster Zones====

A new computer vision system has been developed to detect vital signs in hazardous zones using drones. The outcomes of the study indicate that the system can detect breathing patterns from an aerial platform with a high level of accuracy. The system effectively differentiates between humans and mannequins in daylight, with the aid of a human operator and a robust PC equipped with MATLAB (Al-Naji et al., 2019).

====Search and Rescue System for Alive Human Detection by Semi-Autonomous Mobile Rescue Robot====

Search and rescue system for alive human detection by semi-autonomous mobile rescue robot (Uddin & Islam 2016).

This study introduces a low-cost robot designed for detecting humans in perilous rescue missions. The paper presents several system block diagrams and flowcharts to demonstrate the robot's operational capabilities. The robot incorporates a PIR sensor and an IP camera to detect human presence through their infrared radiation. These sensors are readily available and cost-effective compared to those of other urban search and rescue robots (Uddin & Islam, 2016).

====Legged Robots That Balance====

This study has significant implications for theories on human motor control and lays a fundamental groundwork for legged locomotion, which is one of the least explored areas of robotics. The study addresses the potential of constructing functional legged robots that can run and maintain balance (Raibert, 2000).

====An Overview of Legged Robots====

The present paper highlights the progress and the current state-of-the-art in the field of legged locomotion systems. The study examines various possibilities for mobile robots, including artificial legged locomotion systems, and discusses their benefits and drawbacks. It explores the advantages and limitations of such systems, providing insights into their potential for future advancements (Tenreiro Machado & Silva, 2006).

====Designing, developing, and deploying systems to support human–robot teams in disaster response====

The focus of this paper is on the creation, construction, and implementation of systems that aid in human-robot teams during disaster response. The paper has resulted in significant advancements in robot mapping, robot autonomy for operating in challenging terrain, collaborative planning, and human-robot interaction. The presentation of information considers the fact that these contexts can be stressful, with individuals working under varying levels of cognitive load (Kruijff et al., 2014).

====Perception-Driven Obstacle-Aided Locomotion for Snake Robots: The State of the Art, Challenges and Possibilities====

Snake robots have the potential to be equipped with sensors and tools to transport materials to areas that are hazardous or inaccessible to other robots and humans. Their flexible and slender design allows them to navigate through narrow and confined spaces, making them ideal for performing tasks in environments such as collapsed buildings or underground tunnels. By incorporating sensors and specialized equipment, snake robots can operate in a variety of hazardous environments, including those with high levels of radiation or toxic chemicals, without putting human workers at risk. The authors "expanded the description for increasing the level of autonomy within three main robot technology areas: guidance, navigation, and control" (Sanfilippo et al., 2017).

Robotic Urban Search and Rescue: A Survey from the Control Perspective

The current paper presents an extensive review of advancements in robotic control for urban search and rescue (USAR) settings, which is an exciting and challenging field of research. The paper covers the development of low-level controllers to facilitate rescue robot autonomy, task sharing between the operator and robot for multiple tasks, and high-level control schemes designed for multi-robot rescue teams. These innovations aim to improve the functionality, reliability, and effectiveness of rescue robots in disaster response scenarios, which can help save lives and minimize damage (Liu & Nejat, 2013).

Robots Gear Up for Disaster Response

Active scope camera for urban search and rescue. Turning motion in narrow gaps (Hatazaki et al., 2007).

Although brilliant robotic technology exists, there is a need to integrate it into complete, robust systems. Furthermore, there is a need to develop sensors and other components that are smaller, stronger, and more affordable (Anthes, 2010).

Active scope camera for urban search and rescue

The focus of this paper is the design and implementation of an Active Scope Camera (ASC) for urban search and rescue (USAR) operations. The ASC is a small, lightweight device that can be deployed to explore confined spaces and provide visual information to rescuers (Hatazaki et al., 2007).

The design of telescopic universal joint for earthquake rescue robot

This paper describes a transmission system that includes a telescopic universal joint, which is used in a snake-like search and rescue robot. The paper emphasizes the importance of designing flexible and adaptable robotic systems that can be used effectively in rescue operations. The telescopic universal joint is highlighted as a promising solution to enhance the capabilities of rescue robots. The paper provides details on the design and construction of the joint, which allows the robot to navigate through narrow spaces and tight corners while maintaining its structural integrity. By improving the flexibility and mobility of rescue robots, such as with the telescopic universal joint, search and rescue operations can become more effective and efficient, potentially saving lives in critical situations (Zhao et al., 2016).

Multimodality robotic systems: Integrated combined legged-aerial mobility for subterranean search-and-rescue

The legged-aerial explorer with its full sensor suite and the UAV carrier platform (Lindqvist et al., 2022).

This article discusses a Boston Dynamics Spot robot that is enhanced with a UAV carrier platform, and an autonomy sensor payload (Lindqvist et al., 2022). The paper demonstrates how to integrate hardware and software with each other and with the architecture of the robot.

The EU-ICARUS project: Developing assistive robotic tools for search and rescue operations

This paper describes the ICARUS tools (De Cubber et al., 2013). One of these tools is a small lightweight camera system that should be able to detect human survivors. According to this article, an infrared sensor with high sensitivity in the mid-IR wavelength range would be the most adequate detection instrument.

This paper also notes that technological tools are of little use to USAR teams if the teams do not know how to use them. Training and support infrastructure is therefore required, for which e-training and trainer simulators can be used.

Detecting Earthquake Victims Through Walls

This article is about a new radar system that is able to detect human breathing and movement through walls (Hampson, 2022). The system uses Doppler radar to measure small changes in electromagnetic waves caused by body movements and breathing. This could be helpful in USAR operations. However, the article also mentions concerns about privacy and researchers emphasize the need for ethical considerations.

Land-Mobile Robots for Rescue and Search: A Technological and Systematic Review

This article discusses sensors that are used to obtain data on the robot’s environment based on 26 papers on rescue and search robots (Huamanchahua et al., 2022). For remote teleoperation cameras are most often used (in 96% of the cases). For identifying victims, 3D mapping and/or image processing is most often used (96%). In around 50% of the papers, microphones are used for human detection. Furthermore, CO2 sensors are used in 73% of the papers. Lastly, in 30% of the cases, temperature sensors are used to measure the victim’s body temperature.

Review Paper on Search and Rescue Robot for Victims of Earthquake and Natural Calamities

This article reviews search and rescue robots designed to locate and assist survivors of natural calamities (Bangalkar & Kharad, 2015). It concludes that these robots have the potential to save lives after natural disasters, and it mentions the need for continued research and development in this area.

Rescue robots in action

Disaster Robotics

This book offers a comprehensive overview of rescue robotics within the broader context of emergency informatics. It provides a chronological summary of the documented deployments of robots in response to disasters and analyzes them formally. The book serves as a definitive guide to the theory and practice of previous disaster robotics (Murphy, 2017).

Application of robot technologies to the disaster sites

Kohga 3 in the gymnasium. Application of Robot Technologies to the Disaster Sites (Osumi, 2014).

For the first time during the Great East Japan Earthquake disaster, Japanese rescue robots were utilized in actual disaster sites. Their tele-operation function and ability to move on debris were essential due to the radioactivity and debris present (Osumi, 2014).

Utilization of Robot Systems in Disaster Sites of the Great Eastern Japan Earthquake

After deploying robots in recovery operations, three key lessons were learned. Firstly, rescue robots are valuable not only for response but also for economic and victim recovery. Secondly, disaster robots need to be optimized to suit the specific missions and stakeholder requirements. Lastly, human-robot interaction continues to pose a challenge (Matsuno et al., 2014).

Emergency response by robots to Fukushima-Daiichi accident: summary and lessons learned

Numerous lessons were learned from the emergency response of robots to the accident, with a focus on the organization and operation scheme as well as systemization (Kawatsuma et al., 2013).

Use of active scope camera in the Kumamoto Earthquake to investigate collapsed houses

The paper discusses the use of the Active Scope Camera (ASC) in the aftermath of the 2016 Kumamoto earthquake in Japan. It explains how the ASC was instrumental in providing crucial information to rescue teams, resulting in the successful rescue of multiple trapped occupants. The paper emphasizes the significance of utilizing advanced imaging technologies like the ASC in urban search and rescue operations, as they can improve the effectiveness and safety of rescue workers (Ambe et al., 2016).

Disaster response and recovery from the perspective of robotics

This paper presents a summary of the use of robotic operations in disaster scenarios. It examines the difficulties encountered by emergency responders and rescue teams in disaster-affected regions, including restricted access to the affected areas, dangerous conditions, and scarce resources (Park et al., 2017).

Drone-assisted disaster management: Finding victims via infrared camera and lidar sensor fusion

Drone hardware specification. Drone-assisted disaster management: Finding victims via infrared camera and lidar sensor fusion (Lee et al., 2016).

This article mentions that the use of drones has proven to be an efficient method to localize survivors in hard-to-reach areas (e.g., collapsed structures) (Lee et al., 2016). The paper presents a comprehensive framework for drone hardware that makes it possible to explore GPS-denied environments. Furthermore, the Hokuyo LiDAR is used for global mapping, and the Intel RealSense for local mapping. The outcomes show that the combination of these sensors can assist USAR operations in finding victims of natural disasters.

Boston Dynamics’ Spot Is Helping Chernobyl Move Towards Safe Decommissioning

This article presents the Boston Dynamics’ Spot robot. It discusses that the robot can be used to inspect and monitor the structural integrity of buildings (Ackerman, 2020). The robot has potential as it can perform dangerous tasks in hazardous environments which reduces the risk for human workers.

Rescue Robots for the Urban Earthquake Environment

The utilization of robots in USAR operations can decrease response time, enhance the effectiveness of rescue operations, and ensure the safety of USAR personnel (Li et al., 2022). However, when testing the robot in Italy, the robot operator experienced cognitive overload. The data provided by the robot was not intuitive, so much information still had to be managed by people. Consequently, rescue personnel with limited training were unable to effectively control the robot on site.

Earthquake sites

Rescue robotics: DDT project on robots and systems for urban search and rescue

Tremors caused by an earthquake destroy buildings, can generate tsunamis, and cause fires, landslides, etc. (Tadokoro, 2009). Due to this, inhabitants can be buried alive, burnt to death, or drowned. Immediate USAR is important as the survival rate decreases as time passes. Especially, the first 72 hours are important and called the golden 72 hours. “Many first responders state that they can rescue if the victim’s position is known. Often, search is beyond human ability” (Tadokoro, 2009, p. 11). This book names equipment used in USAR. Among others, these are an infrared camera to detect human survivors, microwave radar to detect heartbeats, and a fiber scope which is a bending camera.

USAR experiences

Urban Search and Rescue Team. Update zaterdagmiddag

Quote from the blog of the Dutch search team in Turkey: "We merken dat de honden vermoeid raken door de vele inzetten." ("We notice that the dogs are becoming fatigued from the many deployments.") (Urban Search and Rescue Team, 2023). This indicates that the dogs get tired after searching for survivors for several days.

Previous projects

PRE2015 2 Groep1 - Universal Swarm Robotics Software Project

Swarm robotics used for helping people in the debris after an earthquake. The robots communicate and cooperate with each other to remove rubble.

PRE2018 3 Group17

This group presented drone robotics to search for survivors. The drones were controlled via radio on a 900 MHz frequency.

PRE2020_4_Group2

This project researched how a swarm of RoboBees (flying microbots) could be used in USAR operations. The main concern of this project was to identify the location of a gas leak and to communicate that information to the USAR team. They looked into infrared technology, wireless LAN and Bluetooth for communication.

Problem Identification

The problem that will be addressed throughout this paper is the development of a cost-effective vine robot to be used in urban search and rescue (USAR) missions. These missions are extremely critical and require precise technology to navigate and locate survivors through complex terrain. However, current rescue robots have multiple limitations which make it difficult for them to be used successfully. As a result, this paper will explore the vine robot as an alternative solution.

To this day, vine robots have not been implemented into critical USAR missions, since they do not have any navigation or localization abilities. Therefore, research needs to be conducted to solve this problem and allow vine robots to be used in real-life rescue missions. The most suitable sensors will need to be identified for localization, as well as the preferred navigation strategy. On top of this, the level of autonomy will need to be analyzed for the vine robots application.

Overall, current rescue robots have limitations such as mobility issues, limited sensing, and limited autonomy. As a result, this paper will explore these problems and come up with a viable solution for the vine robot to be used in real-life USAR applications.

USE

Users

Within urban search and rescue operations, several users of technology and robotics can be named.

First, the International Search and Rescue Advisory Group (INSARAG) determines the minimum international standards for urban search and rescue (USAR) teams (INSARAG – Preparedness Response, z.d.). This organization establishes a methodology for coordination in earthquake response. Therefore, this organization will have to weigh the pros and cons of using robotics in USAR. If INSARAG sees the added value of using robotics in search and rescue operations, it can promote the usage, and include it in the guidelines.

Second, governments will need to purchase all necessary equipment. For the Netherlands, Nationaal Instituut Publieke Veiligheid is the owner of all the equipment of the Dutch USAR team (Nederlands Instituut Publieke Veiligheid, 2023). This Institute will need to see the added value of the robot while taking into account the guidelines of INSARAG.

The third group of users consists of members of the USAR teams that will have to work with the technology on site. It will be used alongside other techniques that are already used right now. USAR teams are multidisciplinary and not all members of the team will come in contact with the robot (e.g., nurses or doctors). In order to properly use robotics and technology, USAR members who execute the search and rescue operation will need training. For the Dutch USAR team, this training can be conducted by the Staff Officer Education, Training and Exercise (Het team - USAR.NL, z.d.). USAR members will need to be able to set up the technology, navigate it inside a collapsed building (if it is not fully autonomous), read data that it provides, and find survivors with the help of the technology. Furthermore, they will need to decide whether it is safe to follow the path to a survivor. Lastly, team members will need to retract the technology and reuse it if possible.

Users' needs

In order to gather knowledge regarding the needs of these users, emails are sent out to several USAR teams, INSARAG, and Nationaal Instituut Publieke Veiligheid, containing the following questions:

  1. What type of sensors or technology do you use for localization?
  2. How do you localize survivors?
    • If there are methods that allow robots to go closer within rubble, are there specific things to keep in mind for localization?
  3. What can your current equipment not do and what would you like them to improve on?
  4. What is the main issue you have on current equipment?
  5. What makes a rescue operation expensive?
  6. What is the protocol for when you are unable to rescue a survivor? (e.g. assigning probabilities to survivors as resources are limited)

Questions for Nationaal Instituut Publieke Veiligheid:

  1. How is it determined what material is bought for USAR operations?
  2. How do you feel about using robotics in USAR operations?
  3. What makes a rescue operation expensive?

Furthermore, similar questions were posted on Reddit; both the queries and the responses can be found in the Appendix. The answers to the first query keep returning to the main problem with current technology: it is expensive. Both the equipment itself and the training people need in order to operate it are costly. The second query received one answer, from a USAR search specialist. This specialist mentions that they hardly use technology right now, because dogs are much more effective at finding an area that has a person in it. Technology is only used to pinpoint the exact spot the person is in, but it has many disadvantages, since it cannot reach deep into the debris.

Society

Society will benefit from technology and robotics as it will help USAR operations to localize victims and find a path within rubble to a victim. This will influence the time needed to search for survivors after earthquakes. This is important as the chances of surviving decrease with each passing day. Furthermore, replacing human rescuers or search dogs with a robot will put less danger on them. Lastly, robots will be able to go further in the debris without endangering humans or the technology itself. So, the usage of robots and technology will influence the number of people that can be saved.

Enterprise

If the robot or technology will be available for sale, the company behind it will have several interests. First, it will want to create a robot that can make a difference. The robot should help USAR operations with localizing survivors in the rubble. Second, the company will want to either make a profit or make enough money to break even. It will need money to invest back in the product to further improve the robot. For the company, it is important to take into account the guidelines of INSARAG as this institute will promote the usage of rescue robots in their global network.

Specifications

Before identifying the solutions for navigation and localization, a clear list of specifications for the robot/technology is given.

Must:

  • The vine robot must communicate from the tip of the vine, back to its starting location (the pump). This would allow the rescuers to communicate with the survivor by providing live feedback.
  • For its application, the vine robot must be semi-autonomous: a human operator must be able to steer the vine robot by command, while the robot maneuvers itself when xxx???
  • Sensors must be used for localizing the survivor. The exact sensors will be explored in the next chapter.
  • Since the vine robot is to be used in critical rescue missions, it must be easy to transport and quick to set up. This would make it easier to increase the number of vine robots used in USAR missions, which increases the overall success rate in localizing survivors.

Should:

  • The vine robot should supply water and air once it has localized the survivor and navigated towards them. This would make sure that the survivor can stay conscious with essential needs before the rescuers come to evacuate them.
  • It should be relatively cheap and easy to manufacture, allowing it to be mass-produced.

Could:

  • Once the autonomy level is sufficiently high, the vine robots could communicate with other vine robots. This swarm behavior would allow the vine robots to cover a larger area efficiently.
  • The vine robot could retract back along its original path. This would allow the rescuers to reuse the vine robot for other rescue missions instead of wasting the money and technology.
  • Inflate into a safety structure, protecting the survivor from within in case more rubble starts to fall on top of them.
  • (Create a 3D model of the environment)

Won't:

  • The vine robot is a technology that will only assist rescuers during USAR missions. It won't be able to actively get survivors out of situations, as it will not have the capabilities to evacuate them.
  • Supply heating or other health supplies other than water or air
  • Be infinitely long (it has a fixed length)
  • Be able to lift x kg
  • Be able to put out fires and melt in extreme heat

Vine robot

Vine Robot

With the help of the state-of-the-art, the vine robot was chosen as the design. Vine robots are soft, small, lightweight, and flexible robots designed to navigate through tight spaces, making them ideal for searching collapsed buildings for survivors. These robots use air pressure to extend the body from its tip, making the robot move and grow like a worm or a growing vine. The vine robot concept was introduced in 2017, making it a recent robotic design. There have been attempts to steer the vine robot in chosen directions using so-called muscles: four tubes (muscles) placed around the large tube (the vine robot body), each with its own air supply. Varying the pressure in these muscles allows the vine robot to bend in any direction.

However, the current vine robots lack the ability to navigate and localize, which is a critical requirement for them to be used successfully in such missions. The following chapters provide a research analysis on the best possible sensors that can be used for localization, as well as a simulation to find out what algorithm works best for navigation.

Now that the vine robot has been chosen as the robotic design, the constraints of the robot can be established. This is mainly a brief description of how the robot can be designed. A complete design and manufacturing plan for the robot falls outside the scope of this report.

Constraints:

  • Size: A small vine robot can navigate through tight spaces more easily than larger robots, allowing it to search for survivors in areas that might otherwise be inaccessible.
  • Diameter: The chosen diameter should be small enough that an air compressor can quickly fill the expanding volume of the body, while still enabling growth at low pressure. In research, the main body tube diameter of the robot is 5 cm (Coad et al., 2019).
  • Length: The robot body can grow as the fluid pressure inside the body is higher in comparison to the outside (Blumenschein, 2020).
  • Material: For the vine robot to be lightweight and flexible, the materials must share these properties.
    • Soft and flexible materials such as silicone or rubber can be used to mimic the flexibility of a vine. These materials deform easily and can conform to complex shapes.
    • Carbon fiber can be used to provide the structural stability of the vine robot. It is a lightweight and strong material with a high strength-to-weight ratio, making it ideal for this application.
  • Weight: The vine robot should be lightweight, so that it can be easily transported to the search site and placed into position by the rescue team. Additionally, this makes it possible to deploy the robot in unstable environments, as it will not add significant stress to the collapsed structure.
  • Cost: The cost of the vine robot should be reasonable and affordable for rescue teams to deploy. The base of the robot costs about €700,-, including one vine robot body; additional bodies are €40,- per body (Vine robots, z.d.). The chosen sensor then needs to be added, based on which one proves best after research.
  • Load Capacity: The pressure within the body is kept at 3.5 psi (Vine robots, z.d.), which corresponds to roughly 0.246 kg/cm². How much the robot can hold therefore depends on the length of the body at that time.
  • Mobility: The vine robot should be able to navigate through tight spaces and climb up surfaces with ease. It can go through gaps as small as 4.5 centimeters with a body diameter of 7 centimeters (Coad et al., 2020).
  • Growth speed: The vine robot right now can grow at a maximum speed of 10 cm/s (Coad et al., 2020).
  • Turn radius: In research, a vine robot was able to round a 90 degree turn (Coad et al., 2019). The optimal turn radius supplied in current literature is 20 centimeters (Auf der Maur et al., 2021).
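As a rough sanity check on the load-capacity constraint above, the force a pressurized vine robot tip can exert is approximately the internal pressure times the cross-sectional area of the body tube. The sketch below plugs in the 3.5 psi pressure and 5 cm diameter cited above; it is a back-of-envelope estimate that ignores wall friction, material limits, and the shape of the contact area.

```python
import math

# Back-of-envelope estimate of the force a pressurized vine robot tip can
# exert. Assumed inputs (from the constraints above): 3.5 psi internal
# pressure and a 5 cm main-body diameter; real values depend on the build.

PSI_TO_PA = 6894.757  # pascals per psi

def tip_force_newtons(pressure_psi: float, diameter_m: float) -> float:
    """Force = pressure x cross-sectional area of the body tube."""
    area = math.pi * (diameter_m / 2) ** 2
    return pressure_psi * PSI_TO_PA * area

force = tip_force_newtons(3.5, 0.05)  # roughly 47 N
equivalent_kg = force / 9.81          # roughly 4.8 kg held against gravity
```

This suggests the tip can push with a few kilograms-force at most, which is consistent with the constraint that load capacity depends on the pressurized body rather than on rigid structure.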

Localization

Current Difficulties

Earthquakes are one of the most devastating natural disasters, causing widespread destruction and loss of life. One of the biggest challenges that emergency responders and aid organizations face in the aftermath of an earthquake is localizing victims. This task can be extremely difficult due to a variety of factors, including the scale of the disaster, the nature of the terrain, and the complexity of the affected infrastructure.

Firstly, the scale of the disaster is often overwhelming, making it difficult for rescue teams to quickly locate and reach those in need of assistance. Earthquakes can cause extensive damage to buildings, roads, and other infrastructure, which can make it challenging for rescue teams to navigate the affected areas. Additionally, earthquakes can cause landslides, debris flows, and other hazardous conditions that can further impede rescue efforts.

Secondly, the nature of the terrain can also make it difficult to localize victims after earthquakes. Many earthquakes occur in mountainous or hilly areas, which can be challenging for rescue teams to access. These areas may have steep slopes, narrow paths, and other obstacles that can make it difficult to reach victims. Additionally, earthquakes can cause landslides and rockfalls, which can further complicate rescue efforts.

Thirdly, the complexity of the affected infrastructure can also pose challenges for rescue teams. Earthquakes can damage roads, bridges, and other infrastructure, which can make it difficult for rescue teams to access affected areas. In addition, damage to communication networks can make it difficult for rescue teams to coordinate their efforts and share information about the location of victims.

Lastly, the timing of earthquakes can also complicate rescue efforts. Earthquakes can occur at any time, day or night, and may cause power outages, making it difficult for rescue teams to operate in the dark. Additionally, aftershocks can further damage infrastructure and create additional hazards, making it difficult for rescue teams to work safely.

In conclusion, localizing victims after earthquakes is a challenging task that requires extensive planning, coordination, and resources. The scale of the disaster, the nature of the terrain, the complexity of the affected infrastructure, and the timing of the earthquake can all pose significant challenges for rescue teams.

Background of localization for vine robots

Infrared sensor module for Arduino

Localization of survivors is a critical task in search and rescue operations in destroyed buildings. Vine robots can play an important role in this task by using their flexibility and agility to navigate through complex and unpredictable environments and locate survivors. Localization involves determining the position of the robot and the position of any survivors in the environment, and can be achieved through a variety of techniques and strategies.

An approach to localization is to use a combination of sensors and algorithms to detect and track the location of survivors. This can include sensors that detect sound, heat, or movement, as well as algorithms that use this data to determine the location of survivors in the environment. Which current technologies have these capabilities?

Detecting of heat

Vine robots can be equipped with a range of sensors that enable them to detect heat in their surroundings. Infrared cameras and thermal imaging systems are among the most commonly used sensors for detecting heat in robots.

Infrared cameras

Infrared cameras work by detecting infrared radiation emitted by objects and converting it into visual representations that can be interpreted by the robot's control systems.

Thermal imaging with cold spots

Thermal imaging sensors

Thermal imaging systems use a more advanced technology that can detect temperature changes with higher precision, which can enable vine robots to identify potential sources of heat and determine the location and movement of individuals in a given environment.

Contact sensors

Contact sensors can be used to detect heat sources that come into direct contact with the robot's sensors. For example, if a vine robot comes into contact with a hot object such as a stove or a piece of machinery, the heat from the object can be detected by the contact sensors.

Gas sensors

Gas sensors can be used to detect the presence of combustible gases such as methane or propane, which can be an indicator of a potential fire or explosion.

Detecting of sound

Detecting sound is another critical capability for vine robots, especially in search and rescue operations. Vine robots can be equipped with a range of sensors that enable them to detect and interpret sound waves in their surroundings. A common type of sensor used to detect sound is a microphone. Microphones can capture sound waves in the environment and convert them into electrical signals that can be interpreted by the robot's control systems or communicated back to a human operator for analysis.

Ultrasonic sensors

Vine robots can also be equipped with ultrasonic sensors, which enable them to detect sound waves that are beyond the range of human hearing. Ultrasonic sensors work by emitting high-frequency sound waves that bounce off objects in the environment and return to the sensor, producing an electrical signal that can be interpreted by the robot's control systems.

Vibration sensors

In addition to ultrasonic sensors, vine robots can be equipped with vibration sensors, which can detect sound waves that are not audible to the human ear. Vibration sensors work by detecting the tiny vibrations in solid objects caused by sound waves passing through them. These vibrations are converted into electrical signals that can be interpreted by the robot's control systems.

Detecting movement

Detecting movement is another critical capability for vine robots, especially in search and rescue operations. Vine robots can be equipped with a range of sensors that enable them to detect and interpret movement in their surroundings.

Cameras

One common type of sensor used in vine robots for detecting movement is a camera. Cameras can be used to capture visual data from the environment and interpret it using computer vision algorithms to detect movement.

Motion sensors

Another type of sensor used in vine robots for detecting movement is a motion sensor.

Motion sensors or motion detectors are electronic devices that are designed to detect movement in their surrounding environment. They work by measuring changes in the level of infrared radiation, sound waves or vibrations caused by movement.

There are 4 types of motion sensors being used in the industry:

  1. Passive infrared (PIR) sensors: These sensors detect changes in the level of infrared radiation (heat) in their field of view caused by moving objects.
  2. Ultrasonic sensors: These sensors emit high-frequency sound waves and measure the time it takes for the waves to bounce back off an object. If an object moves in front of the sensor, it changes the time it takes for the sound waves to return, which triggers a response.
  3. Microwave sensors: Similar to ultrasonic sensors, but these sensors emit electromagnetic waves instead.
  4. Vibration sensors: These sensors measure changes in acceleration caused by movement.
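The time-of-flight principle behind ultrasonic (and microwave) motion sensors can be sketched in a few lines. The speed of sound and the 2 cm change threshold below are illustrative values, not parameters from the source.

```python
# Minimal sketch of how an ultrasonic motion sensor turns an echo time into
# a distance, and flags motion as a change between successive readings.
# The 343 m/s speed of sound (air, ~20 degrees C) and the 2 cm threshold
# are illustrative assumptions.

SPEED_OF_SOUND = 343.0  # m/s

def echo_to_distance(echo_time_s: float) -> float:
    """The pulse travels to the object and back, so halve the round trip."""
    return SPEED_OF_SOUND * echo_time_s / 2

def motion_detected(prev_m: float, curr_m: float,
                    threshold_m: float = 0.02) -> bool:
    """A change in echo distance larger than the threshold triggers a response."""
    return abs(curr_m - prev_m) > threshold_m

d1 = echo_to_distance(0.010)   # 1.715 m
d2 = echo_to_distance(0.0097)  # the object moved closer
moved = motion_detected(d1, d2)
```

In practice a sensor would also filter noise and average several pulses, but the distance-change test above is the core mechanism described in points 2 and 3.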

Evaluation of sensors in localization

The sensor types above cover the basics of detecting a human. Below, an evaluation is made of these and other sensors.

Types of Sensors:

  • Thermal Imaging Camera: This sensor creates an image of an object by using the infrared radiation emitted from it. It can detect the heat of a person in dark environments, making it effective for search and rescue missions. It can identify the location of a person by detecting their body heat through walls or other obstructions (Engineering, 2022).
  • Gas Detector: These are used to detect the presence of gases across an area, such as carbon monoxide and methane. They can detect carbon dioxide, which is exhaled by humans, to localize survivors. They can also be used to detect whether there are toxic gases that could pose risks to the rescuers (Instruments, 2021).
  • Sound Sensor: This sensor detects the intensity of sound waves, using a microphone to provide input. It can be used to locate survivors who may be calling for help, or to detect sounds made by people, such as digging or moving debris (Agarwal, 2019).
  • Light Detection and Ranging (LiDAR): These sensors use light in the form of a pulsed laser to measure distances to objects. They can be used to create 3D maps of different environments and can detect movements, such as a survivor moving around debris (What Is Lidar?, n.d.).
  • Global Positioning System (GPS): This is a satellite constellation that provides accurate positioning and navigation measurements worldwide. It can track the movements of people and help create maps of the environment so that rescuers can locate survivors (What Is GPS and How Does It Work?, n.d.).

These sensors are evaluated in the table below and ranked based on multiple sources. For a better understanding of the outcomes, the underlying results can be found in the appendix. The ranking criteria are as follows:

  • Range: Maximum distance the sensor can detect an object
  • Resolution: The smallest object the sensor can detect
  • Accuracy: The precision of the measurements
  • Cost: The cost of the sensor
Sensor Type Range Resolution Accuracy Cost Ranking
Thermal Imaging Camera High High Very high High 1
Gas Detector Medium High High High 5
Sound Sensor Medium High High Medium 2
Light Detection and Ranging (LiDAR) Medium High High High 3
Global Positioning System (GPS) Very High Medium Medium Medium 4

Based on the ranking, it was determined that thermal imaging cameras are the most effective sensors for localizing survivors under rubble. This is due to its ability to detect heat given off by a human body from a large range with very high accuracy. Additionally, this sensor can be used in dark environments, which some others cannot. Even if a survivor is buried under the rubble, their body will still be able to give off a heat signature that can be detected by a thermal imaging camera. However, one major limitation of only using this sensor is that no visual image of the surrounding environment can be provided. As a result, a visual camera can be used in conjunction with a thermal imaging camera, allowing rescuers to get a better picture of the situation by identifying potential obstacles and hazards.

Pathfinding

Pathfinding is the process of finding the shortest or most efficient path between two points in a given environment. For a vine robot, pathfinding is critical as it allows the robot to navigate through the porous environment of destroyed buildings and reach its intended destination. Pathfinding algorithms are typically used to determine the best route for the robot to take based on factors such as obstacles, terrain and distance.

One approach is to use a combination of reactive and deliberative pathfinding strategies. Reactive pathfinding uses sensors to detect obstacles in real time and makes rapid adjustments to the robot's path to avoid them. This can be especially useful in environments where the obstacles and terrain are constantly changing, such as in a destroyed building. However, reactive pathfinding algorithms are often less efficient than traditional pathfinding algorithms, because they only consider local information and may not find the optimal path. Deliberative pathfinding, on the other hand, involves planning paths ahead of time based on a map or model of the environment. While this approach can be useful in some cases, it may not be practical in a destroyed building where the environment is constantly changing. Another approach is to use machine learning algorithms to train the robot to navigate through the environment based on real-world data. The latter approach is not the main focus of this paper; the combination of reactive and deliberative pathfinding is discussed in further detail below.
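The combination described above can be sketched as follows. This is a minimal illustration, not the robot's actual implementation: the grid map, the BFS planner, and the `navigate` loop are our own simplified stand-ins (a heuristic planner such as A* would be the more common deliberative choice), but they show the interplay of planning on the known map and reacting to obstacles discovered en route.

```python
from collections import deque

def plan(grid, start, goal):
    """Deliberative step: BFS shortest path on the currently known map
    (0 = free, 1 = obstacle). Returns a list of cells or None."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    return None

def navigate(known_grid, true_grid, start, goal):
    """Reactive step: follow the plan, and when the 'sensor' reveals an
    obstacle the map did not contain, record it and replan."""
    pos, path = start, plan(known_grid, start, goal)
    while path:
        if pos == goal:
            return True
        nxt = path[path.index(pos) + 1]
        if true_grid[nxt[0]][nxt[1]] == 1:      # obstacle detected in real time
            known_grid[nxt[0]][nxt[1]] = 1      # update the map
            path = plan(known_grid, pos, goal)  # deliberative replan
            continue
        pos = nxt
    return pos == goal
```

The design choice mirrors the text: deliberative planning supplies efficiency when the map is right, while the reactive check keeps the robot from walking into debris the map did not know about.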

Pathfinding for vine robots

As described above, sensors are one of the critical components in reactive and deliberative pathfinding. Sensors provide the real-time information that the algorithm needs to navigate the vine robot through a dynamic environment, avoid obstacles, and adapt to changing conditions. In the vine robot system, reactive pathfinding is used to localize the survivor, as described in a previous section of this paper. Rather, this section focuses on deliberative pathfinding, in which sensory information read from a radar is used to build a model of the environment in which signs of life have been detected within a collapsed building. These radars currently require a surface area far larger than that available on the vine robot itself. However, progress is being made on reducing the size of these radars so that they can operate on a drone. Current experiments seem promising, but are not yet fully realized within the research community (example of a drone with a radar). When this becomes a viable option, attaching such a radar to the vine robot should be possible.

Thus, a proposed solution for now is to use these large sensors in conjunction with the vine robot: the rescue team first scans the environment with the radar and then deploys the vine robot in the correct location, ensuring a higher chance of locating the survivor. The built-in localization algorithm with pathfinding capabilities should then be able to correctly identify a survivor within the rubble. It is worth noting that this greatly increases the cost of the search and rescue mission as a whole, and that this approach does not enhance the localization capabilities of the vine robot itself; the mechanisms of reactive pathfinding remain part of the robot's own localization techniques. This extension is therefore not necessary for the vine robot to work as intended, but it can enhance the success rate of finding survivors.

Ground Penetrating Radar (GPR)

The underlying technology for GPR is electromagnetic propagation. Many technologies can realize detection, but electromagnetic ones assure the best results in terms of speed and accuracy, and they also work in noisy environments. Three methods can be used to detect the presence of a person in a given area through the use of electromagnetic propagation. The first method detects active or passive electronic devices carried by the person, while the second method detects the person's body as a perturbation of the backscattered electromagnetic wave. In both cases, a discontinuity in the medium of electromagnetic wave propagation is required, which means that trapped individuals must have some degree of freedom of movement. However, these methods are only effective in homogeneous mediums, which is not always the case in collapsed buildings, where debris can consist of stratified slabs and heavy rubble at different angles. In these situations, the retrieval of dielectric contrast features cannot be used as a means for detection.

Moreover, the irregular surface and instability of ruins make linear scans of radar at soil contact impossible. Instead, the sensor system must be located in a static position or on board of an aerial vehicle to detect the presence of individuals. (Ferrara V., 2015)

Vital Sign Detection

Vital signs are detected as follows: a continuous wave (CW) signal is transmitted towards the human subject; the movements of heartbeat and breathing modulate the phase of the signal, generating a Doppler shift on the reflected signal, which travels back to the receiver; finally, demodulation of the received signal reveals the vital signs. These radar systems can be compact in size, making it feasible to fit them on unmanned aerial vehicles (UAVs). Unfortunately, the vibration and aerodynamic instability of a flying vehicle together limit the accuracy and validity of the measurements. Aside from this, the operating frequency greatly impacts the penetration depth of the sensing capability. Current GPR systems have an increased sensitivity level due to the switch to ultra-wideband radars. The increase in sensitivity does improve the probability of detection, but it can also produce a larger number of false alarms. Research exists on both systems, but both still need further improvement before they can be used in present-day situations. (Ferrara V., 2015)
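A minimal numerical sketch of the Doppler principle described above, using assumed values that are not from the source (a 2.4 GHz CW carrier, 15 breaths per minute, 5 mm chest-wall displacement): the breathing motion modulates the echo's phase, and the breathing rate reappears as the dominant peak in the spectrum of the recovered phase.

```python
import numpy as np

# Assumed parameters: 2.4 GHz CW carrier, 0.25 Hz (15/min) breathing,
# 5 mm chest-wall displacement, 32 s observation at 100 Hz sampling.
fs, duration = 100.0, 32.0
wavelength = 3e8 / 2.4e9                         # carrier wavelength (m)
t = np.arange(int(fs * duration)) / fs
chest = 0.005 * np.sin(2 * np.pi * 0.25 * t)     # chest-wall motion (m)
phase = 4 * np.pi * chest / wavelength           # phase of the reflected signal

# "Demodulation": the recovered phase is periodic at the breathing rate,
# so its spectrum peaks at 0.25 Hz.
spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(len(phase), 1 / fs)
breathing_hz = freqs[spectrum[1:].argmax() + 1]  # skip the DC bin
```

In a real system the phase would be recovered from noisy I/Q receiver channels rather than computed directly, and platform vibration would add exactly the kind of spurious modulation the text warns about.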

Constant false alarm rate (CFAR)

Airborne ground penetrating radar can effectively survey large areas that may not be easily accessible, making it a valuable tool for underground imaging. One method to address this imaging challenge utilizes a regularized inverse scattering procedure combined with CFAR detection. CFAR detection is an adaptive algorithm commonly used in radar systems to detect target returns amidst noise, clutter, and interference. Among the CFAR algorithms, the cell-averaging (CA) CFAR is significant: it estimates the mean background level by averaging the signal level in M neighboring range cells. The CA-CFAR algorithm has shown promising capabilities in achieving a stable solution and providing well-focused images. By using this technique, the actual location and size of buried anomalies can be inferred. (Ilaria C. et al., 2012)
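The cell-averaging estimate described above can be sketched as follows. This is a generic one-dimensional illustration, not the procedure from (Ilaria C. et al, 2012); the training/guard-cell counts and the false-alarm rate are assumed parameters, and the threshold factor uses the standard CA-CFAR scaling formula.

```python
import numpy as np

def ca_cfar(signal, num_train=8, num_guard=2, rate_fa=1e-2):
    """Cell-averaging CFAR: for each cell under test (CUT), estimate the
    background level from `num_train` training cells on each side, skipping
    `num_guard` guard cells next to the CUT, then compare the CUT against
    an adaptive threshold."""
    n = len(signal)
    num_cells = 2 * num_train
    # Standard CA-CFAR scaling for the desired false-alarm probability.
    alpha = num_cells * (rate_fa ** (-1.0 / num_cells) - 1.0)
    detections = []
    half = num_train + num_guard
    for cut in range(half, n - half):
        left = signal[cut - half: cut - num_guard]
        right = signal[cut + num_guard + 1: cut + half + 1]
        noise = (left.sum() + right.sum()) / num_cells
        if signal[cut] > alpha * noise:
            detections.append(cut)
    return detections
```

Because the threshold adapts to the local noise estimate, a strong return stands out even when the clutter level varies along the range profile, which is exactly the property that makes CFAR attractive amid rubble clutter.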

Ultra-wideband (UWB)

The proposed method for vital sign detection is highly robust and suitable for detecting multiple trapped victims in challenging environments with low signal-to-noise and clutter ratios. The removal of clutter has been effective, and the method can extract multiple vital sign information accurately and automatically. These findings have significant implications for disaster rescue missions, as they have the potential to improve the effectiveness of rescue operations by providing reliable and precise vital sign detection. Future research should focus on extending the method to suppress or separate any remaining clutter and developing a real-time and robust approach to detect vital signs in even more complex environments. (Yanyun X. et al, 2013)

Simulation

Goal

To evaluate the localization capabilities of the Vine Robot, a simulation was created. For a Vine Robot to accurately determine the location of a target, the robot must be able to get in close proximity to that target. As such, the robot's localization algorithm must include pathfinding capabilities. The simulation is used to test how a custom-developed localization algorithm picks up stimuli from the environment and reacts to these stimuli to locate the survivor from whom they originate. For the simulation to succeed, the robot is required to reach the survivor's location in at least 75% of the randomly generated scenarios.

The complexity of creating such a vine robot and its testing environment, due to the sheer amount of different possible scenarios and variables involved in finding survivors within debris, is currently outside of the scope of this course. However, it is believed that the noisy environment and the robot can be simulated to retain their essential properties. The simulated environment will be generated using clusters of debris, with random noise representing interference, and intermittent stimuli that indicate the presence of a survivor. The random noise and intermittent stimuli are critical in simulating the robot’s sensors, since the data received from the sensors are unreliable when going through debris.

Specification

A screenshot of the simulation environment with all components visible.

The simulation is made using NetLogo 3D. This software has been chosen for its simplicity in representing the environment using patches and turtles.

The vine robot is represented as a turtle. For algorithms involving swarm robotics, multiple turtles can be used. The robot can only move forwards, with a maximum speed of 10 centimetres per second and a turning radius of at least 20 centimetres. The robot cannot move onto patches it has visited before, because the vine robot would then hit itself. The robot has a viewing angle of at most 180 degrees. When swarm robotics is involved, the different robots are also not allowed to cross each other.

The environment is randomly filled with large chunks of debris. These are represented by grey patches. The robot may not move onto these patches. To simulate the debris from collapsed buildings more closely, the debris patches are clustered together. All other patches contain smaller rubble, which the robot can move through.

The environment also contains an adjustable number of survivors, which are represented by green patches. These survivors are stationary, as they are stuck underneath the rubble. Each survivor gives off heat in the form of stimuli, which the robot can pick up on within its field of view. The stimuli are represented by turtles that move away from the corresponding survivor and die after a few ticks. The survivor itself can also be detected when within the robot’s field of view. Once the robot has reached a survivor, the user is notified and the simulation is stopped.

Red patches are used to simulate random noise and false positives. These red patches send out stimuli in the same way as the green patches do. The robot cannot distinguish these stimuli from one another, but it can identify red patches as false positives when they are in its field of view.
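The movement rules above (pursuing stimuli, never revisiting a patch) can be sketched outside NetLogo as well. The following is a hypothetical Python stand-in, not the simulation source: the robot greedily steps toward the survivor's stimulus source on a grid and fails when boxed in, mirroring how the vine robot cannot back out of a dead end.

```python
def locate(start, survivor, blocked, size=10, max_steps=200):
    """Greedily move a robot from `start` toward `survivor` on a size x size
    grid. `blocked` is a set of debris cells. Visited cells may not be
    re-entered (the vine robot would hit itself), so dead ends are fatal."""
    visited = {start}
    pos = start
    for _ in range(max_steps):
        if pos == survivor:
            return True
        # Candidate moves: 4-neighbours that are in bounds, free, unvisited.
        moves = [(pos[0] + dx, pos[1] + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        moves = [m for m in moves
                 if 0 <= m[0] < size and 0 <= m[1] < size
                 and m not in blocked and m not in visited]
        if not moves:
            return False  # dead end: the robot cannot back up
        # Follow the stimulus gradient: minimize Manhattan distance.
        pos = min(moves, key=lambda m: abs(m[0] - survivor[0])
                                       + abs(m[1] - survivor[1]))
        visited.add(pos)
    return False
```

On an open grid `locate((0, 0), (5, 5), set())` succeeds, while blocking both exits of the start cell makes it fail immediately; noise patches and stimulus decay, which the NetLogo simulation adds, are omitted here.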

Design choices

The simulation cannot represent the whole scenario. Some design decisions were made where it may not be directly apparent as to why they were chosen. Therefore, these design choices are explained in detail here.

In the simulation, the survivor cannot be inside a patch that indicates a large chunk of debris. Intuitively this would mean that the survivor is stuck in small, light debris which the vine robot can move through. However, it is known that the target is stuck underneath rubble, as they would not need rescuing otherwise. As such, it is assumed that the weight of all the rubble underneath which the survivor is trapped prevents them from moving into another patch. The robot, however, is able to move between patches of small, light debris due to its lesser size. Whether the weight is caused by a single big piece of debris or a collection of smaller pieces of debris can be influential in getting the survivor out of the debris, but it is not important for finding them. Thus it is also not important for the simulation.

To properly represent the robot’s physical capabilities and limitations, the simulation environment requires a sense of scale. For simplicity, patches were given a dimension of 10 centimetres cubed and each tick represents 1 second. As a result, any patch that is not marked as a debris chunk patch has an opening that is larger than the robot’s main body tube with a diameter of 5 centimetres. For the robot’s mobility, it entails that a single patch can be traversed per tick and that sideways movement is limited to a 45-degree angle per tick.

One might argue that the robot can turn with a tighter radius when pressed up against a chunk of debris. In theory, this would function by trying to turn in a certain direction while pressing up against the debris. The forwards movement would then be blocked by the debris, while the sideways movement can continue unrestricted. However, this theory has a major downside. Deliberately applying pressure to a chunk of stationary debris could cause it, or the surrounding debris, to shift. Such a shift has unpredictable side effects, which could have disastrous consequences for survivors stuck in the debris.

Simulation Scenario

In active use the robot has to guide rescuers to possible locations of survivors in the environment. This scenario can be described in PEAS as:

  • Performance Measure: The performance measure for the vine robot in the simulation would be the number of survivors found within a given time frame.
  • Environment: The environment for the vine robot in the simulation would be a disaster zone with multiple clusters of debris. The debris can be of varying sizes and shapes and may obstruct the robot's path.
  • Actuators: The actuators for the vine robot in the simulation would be the various robotic mechanisms that enable the robot to move and interact with its environment.
  • Sensors: The sensors for the vine robot in the simulation would include a set of cameras and microphones to capture different stimuli. Additionally, the robot would also have sensors to detect obstacles in its path.


Environment description:

  • Partially observable
  • Stochastic
  • Collaborative
  • Single-agent
  • Dynamic
  • Sequential

Test Plan

The objective of the test plan is to evaluate the localization algorithm of the Vine Robot in a simulated environment with clusters of debris, random noise representing interference, and intermittent stimuli that indicate the presence of a survivor. The test aims to ensure the Vine Robot can locate a survivor in at least 75% of the randomly generated scenarios.

However, the following risks should be considered during the test:

  • The simulation may not accurately represent the real-world scenario.
  • The Vine Robot's pathfinding algorithm may not work as expected in the simulated environment.
  • The Vine Robot's limitations may not be accurately represented in the simulation.

Test Scenarios:

Each of the following scenarios will be run 10 times; each round of 10 runs will be repeated with the parameter spawn-interval set to its minimum, median and maximum value. This gives the simulation more merit concerning the effect of stimuli on the tests. In addition, each scenario is assigned a weight percentage for the final result, as not every scenario is equally important to the simulation goal. Further explanation is given in the testing metrics.

Scenario 1 (20%): In this scenario, the Vine Robot will be placed in an environment with a survivor in a clear path, with no debris and with the maximum noise. The aim is to test the robot's pathfinding capability in a clear path.

Scenario 2 (20%): In this scenario, the Vine Robot will be placed in an environment with the maximum amount of debris and a survivor, with no noise. The aim is to test the robot's ability to manoeuvre through the debris and locate the survivor.

Scenario 3 (55%): In this scenario, the Vine Robot will be placed in an environment with the median amount of debris clusters and noise, with one survivor. The aim is to test the robot's ability to navigate through this typical environment and locate the survivor.

Scenario 4 (5%): In this scenario, the Vine Robot will be placed in an environment with the maximum random noise and maximum amount of cluster of debris. The aim is to test the robot's ability in the worst environment possible.

Test Procedure:

For each scenario, the following procedure will be followed:

  • The Vine Robot will be placed in the environment with the survivor and any other required parameters set for the scenario.
  • The Vine Robot's localization algorithm will be initiated, and the robot will navigate through the environment to locate the survivor.
  • The number of scenarios where the robot successfully locates the survivor will be recorded.

Test Metrics:

The results will be used to evaluate the Vine Robot's localization algorithm. The percentage of scenarios in which the Vine Robot successfully locates the survivor is converted to a grade between 1 and 10, and each grade is multiplied by its percentage of importance. This gives a better representation of the 75% success rate target; the aim is thus a weighted average grade of at least 7.5.
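The weighting computation can be sketched as follows; the linear mapping from success rate to grade (rate × 10) is our assumption of how the grading is intended.

```python
# Scenario importance weights as defined in the test scenarios above.
weights = {"scenario 1": 0.20, "scenario 2": 0.20,
           "scenario 3": 0.55, "scenario 4": 0.05}

def weighted_grade(success_rates):
    """Combine per-scenario success rates (0..1) into one weighted grade,
    assuming a linear mapping from success rate to a 0-10 grade."""
    return sum(weights[s] * rate * 10 for s, rate in success_rates.items())
```

For instance, perfect runs in Scenarios 1 and 2, a 55% rate in Scenario 3, and total failure in Scenario 4 would yield a weighted grade of about 7.0, just below the 7.5 target.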

Test results:

Below are some of the results shown for Scenario 3. The other results are in the appendix.

Scenario 3
lifespan of stimuli 50
Run iteration Success/Failure Reason for Failure / Blocks discovered Ticks
#1 Success 33/50 blocks discovered 509
#2 Failure Wall hit / 15/50 blocks discovered 228
#3 Failure Wall hit / 6/50 blocks discovered 74
#4 Success 1/50 blocks discovered 14
#5 Success 9/50 blocks discovered 108
#6 Failure Wall hit / 24/50 blocks discovered 167
#7 Success 10/50 blocks discovered 56
#8 Success 12/50 blocks discovered 149
#9 Failure Wall hit / 8/50 blocks discovered 51
#10 Failure Dead end / 8/50 blocks discovered 80
lifespan of stimuli 100
#1 Success 4/50 blocks discovered 11
#2 Success 19/50 blocks discovered 278
#3 Success 6
#4 Success 7/50 blocks discovered 94
#5 Success 3/50 blocks discovered 22
#6 Success 16/50 blocks discovered 105
#7 Failure Wall hit / 5/50 blocks discovered 14
#8 Failure Dead end / 10/50 blocks discovered 106
#9 Success 8/50 blocks discovered 39
#10 Failure Wall hit / 3/50 blocks discovered 3
Average ticks 72
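As a quick cross-check of the table above, tallying the listed outcomes gives the raw Scenario 3 success rate (run outcomes copied directly from the table):

```python
# Scenario 3 run outcomes from the table above ("S" = success, "F" = failure).
runs_lifespan_50  = ["S", "F", "F", "S", "S", "F", "S", "S", "F", "F"]
runs_lifespan_100 = ["S", "S", "S", "S", "S", "S", "F", "F", "S", "F"]

def success_rate(runs):
    """Fraction of runs that ended in success."""
    return runs.count("S") / len(runs)

overall = success_rate(runs_lifespan_50 + runs_lifespan_100)
```

Overall, 12 of the 20 listed runs (60%) succeed, below the 75% target, which is consistent with the test conclusion below that the algorithm needs further optimization.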


Test conclusion

The simulation algorithm used to model the vine robot's behavior in a disaster response scenario involved a combination of pathfinding and survivor detection algorithms. The pathfinding algorithm utilized a search-based approach to navigate through the debris clusters and identify potential paths for the robot to follow. The survivor detection algorithm relied on the robot's sensors to detect signs of life, such as heat. However, while the simulation algorithm provided a useful framework for testing the vine robot's capabilities, it also had several limitations that could have affected the accuracy of the simulation results. For example, the algorithm did not account for potential variations in the environment, such as the presence of dead ends. This led to inaccuracies in the robot's pathfinding decisions, causing it to get stuck quite often.

Another potential shortcoming of the simulation algorithm was its reliance on pre-determined search strategies and decision-making processes. While the algorithm was designed to identify the most efficient paths through the debris clusters and locate survivors based on sensor data, it did not account for potential variations in the environment or unforeseen obstacles that could impact the robot's performance. This could have limited the algorithm's ability to adapt to changing conditions and make optimal decisions in real-time.

Finally, the simulation algorithm may also have been limited by the accuracy and precision of the simulated sensor data used to model the robot's behavior. While the algorithm tried to emulate advanced sensor technology to detect survivors and navigate through the debris clusters, the accuracy and precision of these sensors may have been misrepresented due to the limitations of NetLogo. This could have led to inaccuracies in the simulation results and limited the algorithm's ability to accurately model the vine robot's behavior in a real-world disaster response scenario.

Conclusion

Overall, the vine robot shows potential for urban search and rescue operations. From the user investigation, it was clear that the technology should be semi-autonomous and not too expensive regarding material and training. Furthermore, it was found that the vine robot could be equipped with a thermal imaging camera and a visual camera in order to localize survivors in rubble. A simulation was used to test a localization algorithm due to time and budget constraints; however, a prototype of the vine robot could have given more exact insights into the challenges regarding the algorithm. The results of the simulation tests show that the success rate of the customized algorithm is not high enough for the algorithm to be accepted, as the average grade is below 7.5. Therefore, the algorithm will need to be optimized and tested again before it can be used in real-life scenarios. In particular, the algorithm should account for potential variations in the environment. Further research, especially in partnership with the users, is necessary before the vine robot can be implemented in USAR operations.

Future work

First of all, the customized localization algorithm will need to be improved. The simulation as it is now is fully autonomous. However, the user investigation pointed out that it should be semi-autonomous, since current fully autonomous technology cannot handle the chaotic environment.

Secondly, if the robot is to be used semi-autonomously in rescue operations, USAR members should be trained. This training will teach members how to set up the device, how to read data from the robot, and how to control it if necessary. For instance, if the robot senses a stimulus that is not a human being, the human operator could steer the robot in another direction. What this training should look like, who will need it, and who will give it still has to be investigated.

Thirdly, when the vine robot has found a survivor in the rubble, it has to communicate the exact location to the human operator in some way. This can be done via the program that the operator uses to control the robot semi-autonomously.

Furthermore, it was found that the vine robot could be equipped with a thermal imaging camera and a visual camera. However, it was not investigated how these sensors could be mounted on the robot. Therefore, a detailed design and manufacturing plan should be made.

Lastly, it has not been investigated how the robot can be retracted when used in rubble. Before it is deployed in USAR missions, it should be researched whether the robot can be retracted without damage to its sensors. In addition, it should be explored how many times the robot can be reused.

Overall, if the customized algorithm passes the required success rate and the vine robot is equipped with the right sensors according to a design plan, the robot should be reviewed by USAR teams and INSARAG for deployment at disaster sites.

References

Ackerman, E. (2023i). Boston Dynamics’ Spot Is Helping Chernobyl Move Towards Safe Decommissioning. IEEE Spectrum. https://spectrum.ieee.org/boston-dynamics-spot-chernobyl

Agarwal, T. (2019, July 25). Vibration Sensor: Working, Types and Applications. ElProCus - Electronic Projects for Engineering Students. https://www.elprocus.com/vibration-sensor-working-and-applications/#:~:text=The%20vibration%20sensor%20is%20also,changing%20to%20an%20electrical%20charge

Agarwal, T. (2019, August 16). Sound Sensor: Working, Pin Configuration and Its Applications. ElProCus - Electronic Projects for Engineering Students. https://www.elprocus.com/sound-sensor-working-and-its-applications/

Agarwal, T. (2022, May 23). Different Types of Motion Sensors And How They Work. ElProCus - Electronic Projects for Engineering Students. https://www.elprocus.com/working-of-different-types-of-motion-sensors/

AJLabs. (2023). Infographic: How big were the earthquakes in Turkey, Syria? Earthquakes News | Al Jazeera. https://www.aljazeera.com/news/2023/2/8/infographic-how-big-were-the-earthquakes-in-turkey-syria

Al-Naji, A., Perera, A. G., Mohammed, S. L., & Chahl, J. (2019). Life Signs Detector Using a Drone in Disaster Zones. Remote Sensing, 11(20), 2441. https://doi.org/10.3390/rs11202441

Ambe, Y., Yamamoto, T., Kojima, S., Takane, E., Tadakuma, K., Konyo, M., & Tadokoro, S. (2016). Use of active scope camera in the Kumamoto Earthquake to investigate collapsed houses. International Symposium on Safety, Security, and Rescue Robotics. https://doi.org/10.1109/ssrr.2016.7784272

Analytika (n.d.). High quality scientific equipment. https://www.analytika.gr/en/product-categories-46075/metrites-anichneftes-aerion-nh3-co-co2-ch4-hs-nox-46174/portable-voc-gas-detector-measure-range-0-10ppm-resolution-0-01ppm-73181_73181/#:~:text=VOC%20gas%20detector%20.-,Measure%20range%3A%200%2D10ppm%20.,Resolution%3A%200.01ppm

Anthes, Gary. Robots Gear Up for Disaster Response. Communications of the ACM (2010): 15, 16. Web. 10 Oct. 2012

Assembly. (n.d.). https://www.assemblymag.com/articles/85378-sensing-with-sound#:~:text=These%20sensors%20provide%20excellent%20long,waves%20cannot%20be%20accurately%20detected.

Auf der Maur, P., Djambazi, B., Haberthür, Y., Hörmann, P., Kübler, A., Lustenberger, M., Sigrist, S., Vigen, O., Förster, J., Achermann, F., Hampp, E., Katzschmann, R. K., & Siegwart, R. (2021). RoBoa: Construction and Evaluation of a Steerable Vine Robot for Search and Rescue Applications. 2021 IEEE 4th International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 2021, pp. 15-20, doi: 10.1109/RoboSoft51838.2021.9479192.

Baker Engineering and Risk Consultants. (n.d.). Gas Detection in Process Industry – The Practical Approach. Gas Detection in Process Industry – the Practical Approach. https://www.icheme.org/media/8567/xxv-poster-07.pdf

Bangalkar, Y. V., & Kharad, S. M. (2015). Review Paper on Search and Rescue Robot for Victims of Earthquake and Natural Calamities. International Journal on Recent and Innovation Trends in Computing and Communication, 3(4), 2037-2040.

Blines. (2023). Project USAR. Reddit. https://www.reddit.com/r/Urbansearchandrescue/comments/11lvoms/project_usar/

Blumenschein, L. H., Coad M. M., Haggerty D. A., Okamura A. M., & Hawkes E. W. (2020). Design, Modeling, Control, and Application of Everting Vine Robots. https://doi.org/10.3389/frobt.2020.548266

Boston Dynamics, Inc. (2019). Robotically negotiating stairs (Patent Nr. 11,548,151). Justia. https://patents.justia.com/patent/11548151

Camera Resolution and Range. (n.d.). https://www.flir.com/discover/marine/technologies/resolution/#:~:text=Most%20thermal%20cameras%20can%20see,640%20x%20480%20thermal%20resolution.

CO2 Meter. (2022, September 20). How to Measure Carbon Dioxide (CO2), Range, Accuracy, and Precision. https://www.co2meter.com/blogs/news/how-to-measure-carbon-dioxide#:~:text=This%200.01%25%20(100%20ppm),around%2050%20ppm%20(0.005%25)

Coad, M. M., Blumenschein, L. H., Cutler, S., Zepeda, J. A. R., Naclerio, N. D., El-Hussieny, H., ... & Okamura, A. M. (2019). Vine Robots. IEEE Robotics & Automation Magazine, 27(3), 120-132.

Coad, M. M., Blumenschein, L. H., Cutler, S., Zepeda, J. A. R., Naclerio, N. D., El-Hussieny, H., Mehmood, U., Ryu, J., Hawkes, E. W., & Okamura, A. M. (2020). Vine Robots: Design, Teleoperation, and Deployment for Navigation and Exploration. https://arxiv.org/pdf/1903.00069.pdf

Da Hu, Shuai Li, Junjie Chen, Vineet R. Kamat,Detecting, locating, and characterizing voids in disaster rubble for search and rescue, Advanced Engineering Informatics, Volume 42,2019,100974,ISSN 1474-0346, https://doi.org/10.1016/j.aei.2019.100974

De Cubber, G., Doroftei, D., Serrano, D., Chintamani, K., Sabino, R., & Ourevitch, S. (2013, October). The EU-ICARUS project: developing assistive robotic tools for search and rescue operations. In 2013 IEEE international symposium on safety, security, and rescue robotics (SSRR) (pp. 1-4). IEEE.

Delmerico, J., Mintchev, S., Giusti, A., Gromov, B., Melo, K., Horvat, T., Cadena, C., Hutter, M., Ijspeert, A., Floreano, D., Gambardella, L. M., Siegwart, R., & Scaramuzza, D. (2019). The current state and future outlook of rescue robotics. Journal of Field Robotics, 36(7), 1171–1191. https://doi.org/10.1002/rob.21887

Engineering, O. (2022a, October 14). Thermal Imaging Camera. https://www.omega.com/en-us/. https://www.omega.com/en-us/resources/thermal-imagers

F. Colas, S. Mahesh, F. Pomerleau, M. Liu and R. Siegwart, "3D path planning and execution for search and rescue ground robots," 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 2013, pp. 722-727, doi: 10.1109/IROS.2013.6696431. This paper presents a path-planning system for a static 3D environment, with the use of lazy tensor voting.

Foxtrot841. (2023).Technology in SAR. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/

GPS.gov: Space Segment. (n.d.). https://www.gps.gov/systems/gps/space/

Hampson, M. (2022). Detecting Earthquake Victims Through Walls. IEEE Spectrum. https://spectrum.ieee.org/dopppler-radar-detects-breath

Hatazaki, K., Konyo, M., Isaki, K., Tadokoro, S., and Takemura, F. (2007)  Active scope camera for urban search and rescue IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 2007, pp. 2596-2602, doi: 10.1109/IROS.2007.4399386

Hensen, M. (2023, April 5). GPS Tracking Device Price List. Tracking System Direct. https://www.trackingsystemdirect.com/gps-tracking-device-price-list/

Huamanchahua, D., Aubert, K., Rivas, M., Guerrero, E. L., Kodaka, L., & Guevara, D. C. (2022). Land-Mobile Robots for Rescue and Search: A Technological and Systematic Review. 2022 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS). https://doi.org/10.1109/iemtronics55184.2022.9795829

Instruments, M. (2021, May 16). What are Gas Detectors and How are They Used in Various Industries? MRU Instruments - Emissions Analyzers. https://mru-instruments.com/what-are-gas-detectors-and-how-are-they-used-in-various-industries/

J. (n.d.-b). What is the difference between the Sound Level Sensor and the Sound Level Meter? - Technical Information Library. Technical Information Library. https://www.vernier.com/til/3486

Kamezaki, M., Ishii, H., Ishida, T., Seki, M., Ichiryu, K., Kobayashi, Y., Hashimoto, K., Sugano, S., Takanishi, A., Fujie, M. G., Hashimoto, S., & Yamakawa, H. (2016). Design of four-arm four-crawler disaster response robot OCTOPUS. International Conference on Robotics and Automation. https://doi.org/10.1109/icra.2016.7487447

Kawatsuma, S., Fukushima, M., & Okada, T. (2013). Emergency response by robots to Fukushima-Daiichi accident: summary and lessons learned. Journal of Field Robotics, 30(1), 44-63. doi: 10.1002/rob.21416

Kruijff, G. M., Kruijff-Korbayová, I., Keshavdas, S., Larochelle, B., Janíček, M., Colas, F., Liu, M., Pomerleau, F., Siegwart, R., Looije, R., Smets, N. J. J. M., Mioch, T., Van Diggelen, J., Pirri, F., Gianni, M., Ferri, F., Menna, M., Worst, R., . . . Hlaváč, V. (2014). Designing, developing, and deploying systems to support human–robot teams in disaster response. Advanced Robotics, 28(23), 1547–1570. https://doi.org/10.1080/01691864.2014.985335

Lee, S., Har, D., & Kum, D. (2016). Drone-Assisted Disaster Management: Finding Victims via Infrared Camera and Lidar Sensor Fusion. 2016 3rd Asia-Pacific World Congress on Computer Science and Engineering (APWC on CSE). https://doi.org/10.1109/apwc-on-cse.2016.025

Li, F., Hou, S., Bu, C., & Qu, B. (2022). Rescue Robots for the Urban Earthquake Environment. Disaster Medicine and Public Health Preparedness, 17. https://doi.org/10.1017/dmp.2022.98

Lindqvist, B., Karlsson, S., Koval, A., Tevetzidis, I., Haluška, J., Kanellakis, C., Agha-mohammadi, A. A., & Nikolakopoulos, G. (2022). Multimodality robotic systems: Integrated combined legged-aerial mobility for subterranean search-and-rescue. Robotics and Autonomous Systems, 154, 104134. https://doi.org/10.1016/j.robot.2022.104134

Liu, Y., & Nejat, G. (2013). Robotic urban search and rescue: A survey from the control perspective. Journal of Intelligent & Robotic Systems, 72, 147–165. https://doi.org/10.1007/s10846-013-9822-x

Lyu, Y., Bai, L., Elhousni, M., & Huang, X. (2019). An Interactive LiDAR to Camera Calibration. ArXiv (Cornell University). https://doi.org/10.1109/hpec.2019.8916441

ManOfDiscovery. (2023). Technology in SAR. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/

Matsuno, F., Sato, N., Kon, K., Igarashi, H., Kimura, T., Murphy, R. (2014). Utilization of Robot Systems in Disaster Sites of the Great Eastern Japan Earthquake. In: Yoshida, K., Tadokoro, S. (eds) Field and Service Robotics. Springer Tracts in Advanced Robotics, vol 92. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40686-7_

Murphy, R. R. (2017). Disaster Robotics. Amsterdam University Press

Musikhaus Thomann. (n.d.). Behringer ECM8000. https://www.thomann.de/gr/behringer_ecm_8000.htm

Nosirov, K. K., Shakhobiddinov, A. S., Arabboev, M., Begmatov, S., & Togaev, O. T. (2020). Specially designed multi-functional search and rescue robot. Bulletin of TUIT: Management and Communication Technologies, 2, Article 1. https://doi.org/10.51348/tuitmct211

Osumi, H. (2014). Application of robot technologies to the disaster sites. Report of JSME Research Committee on the Great East Japan Earthquake Disaster, 58-74

Park, S., Oh, Y., & Hong, D. (2017). Disaster response and recovery from the perspective of robotics. International Journal of Precision Engineering and Manufacturing, 18(10), 1475–1482. https://doi.org/10.1007/s12541-017-0175-4

PCE Instruments UK: Test Instruments. (2023, April 9). - Sound Sensor | PCE Instruments. https://www.pce-instruments.com/english/control-systems/sensor/sound-sensor-kat_158575.htm

Raibert, M. H. (2000). Legged Robots That Balance. MIT Press

Sanfilippo, F., Azpiazu, J., Marafioti, G., Transeth, A. A., Stavdahl, Ø., & Liljebäck, P. (2017). Perception-driven obstacle-aided locomotion for snake robots: The state of the art, challenges and possibilities. Applied Sciences, 7(4), 336. https://doi.org/10.3390/app7040336

Seitron. (n.d.). Portable gas detector with rechargeable battery. https://seitron.com/en/portable-gas-detector-with-rechargeable-battery.html

Shop - Wiseome Mini LiDAR Supplier. (n.d.). Wiseome Mini LiDAR Supplier. https://www.wiseomeinc.com/shop

Tadokoro, S. (Ed.). (2009). Rescue robotics: DDT project on robots and systems for urban search and rescue. Springer Science & Business Media.

Tai, Y., & Yu, T.-T. (2022). Using smartphones to locate trapped victims in disasters. Sensors, 22(19), 7502. https://doi.org/10.3390/s22197502

Teledyne FLIR. (2016, April 12). Infrared Camera Accuracy and Uncertainty in Plain Language. https://www.flir.eu/discover/rd-science/infrared-camera-accuracy-and-uncertainty-in-plain-language/

Tenreiro Machado, J. A., & Silva, M. F. (2006). An Overview of Legged Robots

Texas Instruments. (n.d.). An Introduction to Automotive LIDAR. https://www.ti.com/lit/slyy150

Thermal Zoom Cameras. (n.d.). InfraTec Thermography Knowledge. https://www.infratec.eu/thermography/service-support/glossary/thermal-zoom-cameras/

Uddin, Z., & Islam, M. (2016). Search and rescue system for alive human detection by semi-autonomous mobile rescue robot. 2016 International Conference on Innovations in Science, Engineering and Technology (ICISET). https://doi.org/10.1109/iciset.2016.7856489

Urban Search and Rescue Team. (2023). Update zaterdagmiddag. USAR. https://www.usar.nl/

Van Diggelen, F., & Enge, P. (2015). The World’s first GPS MOOC and Worldwide Laboratory using Smartphones. Proceedings of the 28th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2015), 361–369

VectorNav. (2023). Lidar - A measurement technique that uses light emitted from a sensor to measure the range to a target object. https://www.vectornav.com/applications/lidar-mapping

Wang, B. (2017, August). New inexpensive centimeter-accurate GPS system could transform mainstream applications | NextBigFuture.com. NextBigFuture.com. https://www.nextbigfuture.com/2015/05/new-inexpensive-centimeter-accurate-gps.html

WellHealthWorks. (2022, December 11). Thermal Imaging Camera Price. https://wellhealthworks.com/thermal-imaging-camera-price-and-everything-you-need-to-know/

What is GPS and how does it work? (n.d.). https://novatel.com/support/knowledge-and-learning/what-is-gps-gnss

What is lidar? (n.d.). https://oceanservice.noaa.gov/facts/lidar.html

WinnerNot_aloser. (2023). Technology in SAR. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/

Zhao, L., Sun, G., Li, W., & Zhang, H. (2016). The design of telescopic universal joint for earthquake rescue robot. 2016 Asia-Pacific Conference on Intelligent Robot Systems (ACIRS). https://doi.org/10.1109/acirs.2016.7556189

Zepeda, J. A. R. (2022). The 1-minute vine robot. Vine Robots. https://www.vinerobots.org/build-one/the-simple-vine-robot/

Zook_Jo. (2023). Technology in SAR. Reddit. https://www.reddit.com/r/searchandrescue/comments/11bw8qh/technology_in_sar/

Zotomayor, C., (2021). This Vine Robot Is an Unstoppable Force With Tons of Applications. https://www.solidsmack.com/design/vine-robot/

Appendix

Project planning and deliverables

Week Milestones
Week 1 Brainstorm, topic, problem identification, planning, state-of-the-art literature research
Week 2 Further literature study, user analysis, MoSCoW, research for simulation possibility, research sensors
Week 3 Further literature study, start simulation, research localization methods, research pathfinding methods, further research sensors, complete user analysis
Week 4 Work on simulation, further research localization and pathfinding methods, rewrite MoSCoW, research Vine robot specifications
Week 5 Work on simulation, rewrite simulation goal, further research Vine robot specifications, further research sensors
Week 6 Work on simulation, complete simulation specifications and test cases, complete sensor research
Week 7 Finalize simulation, gather results simulation
Week 8 Complete wiki and presentation

Who is doing what?

Names Tasks
Clinton Emok Sensors, localization and simulation
Richard Farla Simulation environment
Yash Israni Simulation algorithm
Tessa de Jong Literature research and USE
Kaj Scholer Sensors and localization
Pepijn Tennebroek Literature Research and USE

Weekly breakdowns

Name Total Breakdown week 1
Clinton Emok 3h Meeting (1h), literature research (1h), user definition (1h)
Richard Farla 5h Course introduction (1h), Brainstorm session (1h), meeting (1h), literature research (1h), milestones (1h)
Yash Israni 3h Meeting (1h), user requirements (1h), literature research (1h)
Tessa de Jong 5h Course introduction (1h), Brainstorm session (1h), meeting (1h), problem statement (1h), literature research (1h)
Kaj Scholer 5h Course introduction (1h), Brainstorm session (1h), meeting (1h), milestones (1h), literature research (1h)
Pepijn Tennebroek 5h Course introduction (1h), Brainstorm session (1h), meeting (1h), problem statement (1h), literature research (1h)
Name Total Breakdown week 2
Clinton Emok 6h Meeting 1 (2h), Meeting 2 (2h), Meeting 3 (1h), Localization (1h)
Richard Farla 6h Meeting 1 (2h), Meeting 2 (2h), Research equipment + how to build (2h)
Yash Israni 7h Meeting 1 (2h), Meeting 2 (2h), Meeting 3 (1h), Sensors (2h)
Tessa de Jong 7h Meeting 1 (2h), Meeting 2 (2h), Meeting 3 (1h), Literature research (2h)
Kaj Scholer 10.5h Meeting 1 (2h), Meeting 2 (2h), Meeting 3 (1h), CAD Modelling (2h), Storyboard (2.5h), Primary Reddit Research (1h)
Pepijn Tennebroek 7h Meeting 1 (2h), Meeting 2 (2h), Meeting 3 (1h), Fix literature (1h), Research Pathplanning (2h)
Name Total Breakdown week 3
Clinton Emok 7h Meeting 1 (2h), Meeting 2 (1h), Meeting 3 (1h), Writing paper (1h), Research(1h), State-of-the-art(1h)
Richard Farla 7h Meeting 1 (2h), Simulation (3h), Meeting 2 (1h), Meeting 3 (1h)
Yash Israni 6h Simulation (1h), Meeting 2 (1h), Meeting 3 (1h), Researching papers (3h)
Tessa de Jong 9h Meeting 1 (2h), Problem statement and Users (3h), Meeting 2 (1h), Meeting 3 (1h), Research (2h)
Kaj Scholer 8h Meeting 1 (2h), Meeting 2 (1h), Meeting 3 (1h), Fixing Wiki Structure/Content (1h), State-of-the-art (1h), Localization + Sensors (2h)
Pepijn Tennebroek 8h Problem statement and Users (3h), Meeting 2 (1h), Fix literature (3h) Meeting 3 (1h)
Name Total Breakdown week 4
Clinton Emok 5.75h Meeting 1 (2h), Localization/Sensors (3h), Meeting 2 (30min), Meeting 3 (15min)
Richard Farla 7.75h Meeting 1 (2h), Simulation (5h), Meeting 2 (30min), Meeting 3 (15min)
Yash Israni 5.75h Meeting 1 (2h), Simulation (3h), Meeting 2 (30min), Meeting 3 (15min)
Tessa de Jong 8.5h Meeting 1 (2h), State-of-the-art (3h), USE (3h), Meeting 2 (30min)
Kaj Scholer 7.75h Meeting 1 (2h), Meeting 2 (30min), Problem statement (2h), Specifications (3h), Meeting 3 (15min)
Pepijn Tennebroek 9.75h Meeting 1 (2h), State-of-the-art (4h), USE (3h), Meeting 2 (30min), Meeting 3 (15min)
Name Total Breakdown week 5
Clinton Emok 4.5h Meeting 2 (2h), Meeting 2.5 (0.5h), Specification Vine Robot (2h)
Richard Farla 7.5h Meeting 1 (1h 30min), Meeting 1.5 (1h), Simulation brainstorm (2h), Meeting 2 (2h), Process simulation feedback (30min), Meeting 2.5 (0.5h)
Yash Israni 6.5h Meeting 1.5 (1h), Meeting 2 (2h), Meeting 2.5 (0.5h), Simulation (3h)
Tessa de Jong 7h Meeting 1 (1h 30min), Reddit posts (30 min), Meeting 2 (2h), Rewriting Wiki (3h)
Kaj Scholer 8.75h Meeting 1 (1h 30min), Meeting 2 (2h), Sensor research (3h 30min), Spell Check + Wiki Edit (1h), Adjusting References (45min)
Pepijn Tennebroek 5.5h Meeting 1 (1h 30min), Meeting 2 (2h), Specifications (2h)
Name Total Breakdown week 6
Clinton Emok 12h Meeting 1 (2h), PEAS (2h), Simulation (4h), Meeting 2 (3.5h), Adding images (30 min)
Richard Farla 13h Meeting 1 (2h), Simulation: target patches (4h), Meeting 2 (3.5h), Simulation: pathfinding (3.5h)
Yash Israni 5.5h Meeting 1 (2h), Meeting 2 (3.5h)
Tessa de Jong 9.5h Meeting 1 (1.5h), Specifications (2h), Rewriting Wiki (3h), Extra research (1h), Meeting 2 (2h)
Kaj Scholer 11h Meeting 1 (1h), Meeting 2 (2h), Sensor research (3h), References (1h), Path finding research (3h), Editing Wiki (1h)
Pepijn Tennebroek 6.5h Wiki research (1h), Meeting 2 (2h), Reading wiki (0.5h), Sensor research (3h)
Name Total Breakdown week 7
Clinton Emok 10h Meeting 1 (3h), Meeting 2 (3h), Executing test plan (4h)
Richard Farla 11.5h Meeting 1 (3h), Simulation specification/design choices (1h), Simulation: vision (1h), Meeting 2 (3h), Simulation: testing (0.5h), Presentation (3h)
Yash Israni 6h Meeting 1 (3h), Meeting 2 (3h)
Tessa de Jong 9h Meeting 1 (3h), Presentation (2h), Wiki (1h), Meeting 2 (3h)
Kaj Scholer 12h Meeting 1 (3h), Presentation Preparation (4h), Meeting 2 (3h), Sensors (2h)
Pepijn Tennebroek 8h Meeting 1 (3h), Presentation (2h), Meeting 2 (3h)
Name Total Breakdown week 8
Clinton Emok 6h Presentation (2h), Meeting 1 (2h), Test conclusion (1h), Miscellaneous (1h)
Richard Farla 6h Presentation (2h), Meeting 1 (2h), Simulation (2h)
Yash Israni 4h Presentation (2h), Meeting 1 (2h)
Tessa de Jong 8.5h Presentation (2h), Meeting 1 (2h), Conclusion and future work (2.5h), Introduction and Users (2h)
Kaj Scholer 7h Presentation (2h), Meeting 1 (2h), Vine Robot + Localization (3h)
Pepijn Tennebroek 9.5h Presentation (2h), Meeting 1 (2h), Conclusion and future work (2.5h), State-of-the-art (2h), Sensors (1h)


User Study 1

Query:

I am currently doing a project about technology being used in search and rescue situations. As a group, we have come up with some questions. These questions should give us an idea of the areas that can be improved, such as the problems with current equipment. If you have contact with other SAR organizations, we would be grateful if you could help us get into contact with them, so that we can further our analysis. Thank you all in advance!

  • What type of sensors/technology are currently used to localize a survivor?
  • What are the main issues with the current equipment or what do they lack in?
  • What makes a rescue operation expensive?
  • What are some protocols for when a survivor cannot be rescued?

Results:

"Currently GPS devices are the most adopted tech used to localize injured parties. There are also a number of devices used by backcountry skiers including chips sewn into clothing that can be read when nearby if they’re buried in an avalanche. I’ve also heard of devices used that are similar to what structure fire uses if they don’t move for a given period of time, a loud alarm signal will sound.

Excluding training, what makes an operation expensive is the man-hours involved. And if there are air assets involved, cost can rise exponentially.

The primary issues with current equipment is often funding based. The few SAR operations that do receive routine funding are constantly having to pinch pennies.

Assuming precise location is known, protocols for when a survivor or body cannot be retrieved are incredibly situational and would primarily revolve around rescuer safety. Usually this takes the form of delayed rescue/recovery. It’s not as if people just throw in the towel." (ManOfDiscovery, 2023).


"If your talking wilderness interface SAR I’m a newbie so I can’t say much. But in terms of Urban SAR you’re talking well over a million dollars worth of equipment and training usually though grants will help with that.

With Urban Search there’s all different kinds of technology. Being a dept that gets UASI grants we have sonar type technology that can pinpoint sound coming from within a collapse. We’ll get choppers on scene providing light. We have thermal imagers and drones that also provide thermal coverage. Arc GIS for on scene operational coordination and to map the scene. Multi gas meters and radiation detectors. HazMat teams on standby. Technical rescue teams with different specialties. High angle rescue in a building collapse. That equipment isn’t cheap. 3 grand worth of individual kit for each tech team member plus I don’t know 50 thousand dollars worth of equipment. Trench rescue equipment maybe another 50 thousand. Collapse rescue teams have another 300 thousand dollars worth of equipment I’d assume. Wood for shoring, we have a truck that just has wood on it, so 200 grand for that. SCBA has to be on for confined spaces each of our ensembles is 5 grand and I’d guess we have have 60 ensembles department wide. Probably more. An air truck costs 150 grand maybe more. Training is extremely expensive full scale event can cost 50 thousand dollars high end. What makes these situation expensive typically is shear size of it and the amount of equipment and personnel required. If it’s a long enough scene we’ll have a fuel truck come out and refuel our rigs. In terms of victims not being able to be rescued. We try and rescue everyone (everyone gets recovered), the people who are easiest to rescue get rescued first but while that’s going on there’s a command post discussing how to rescue those who are harder to access. For urban sar useful technology would be I don’t know some kind of radiological imaging technology that could be deployed rapidly and get us a view of everything under a collapse. And have AI find us and map us to victims." (WinnerNot_aloser, 2023).


"If a victim has a cell phone tracking them is fairly easy. Multiple programs like what3words and sartopo allow search teams to locate a victim with a phone. Sattalites were massive for uptodate areial images.

Drones have massive applications in SAR.

I believe there is very little issue in current equipment other than cost, some equipment is very expensive for what it is.

Rescue OPS are expensive because it often requires extensive manpower, and the afformentioned equipment.

Being unable to rescue a victim isn't in our vocabulary, at worst we have to triage and put a victim on the backburner so to speak for a higher risk vicitm to be rescued. But until a rescue becomes a recovery, if there's a will there is a way.

Please forgive any spelling mistakes, working of a phone atm." (Zook_Jo, 2023).


"Part of what makes a rescue expensive - that few mention - is for operators to maintain currency in their respective roles.

I know that everywhere is different, and each location may have their own way of doing things, but where I am there are governing bodies that require core efficiencies to be maintained a certain number of hours per year.

Each type of rescue; USAR, BSAR, Flood rescue, vertical rescue et al, require that key elements are trained and tested regularly.

This requires an enormous amount of time and money.

As far as technology goes, each unit will be different.

Vertical will require that their vortex (or which ever system) is maintained

Flood will require boats and ark-angles, GPS, underwater cameras et al.

Our USAR and BSAR require FLIR systems, MPD, strokes and mules, countless items.

The list goes on.

It’s also worth noting that much of the time, a rescue will require multiple agencies. You may have police, Sherrifs, specialist paramedics, parks officers, fire et al. Each one if these with their own kit and technology.

If it becomes a multi day rescue then there is the potential for up to a dozen vehicles to be utilised during that time.

The costs add up." (Foxtrot841, 2023).

User Study 2

Query:

Hi, I am currently doing a project on the usage of robotics in urban search and rescue, specifically after earthquakes. In order to understand the needs of the users, we have some questions:

  • How do you localize survivors in rubble at the moment?
  • What type of technology do you use at the moment?
  • What type of sensors do those technologies use?
  • If there are any, what issues do you have with your current methods for localization?
  • Are there other things that you would like your equipment to improve on?
  • How do you feel about the usage of robotics in USAR?
  • What do we need to take into account when designing a robot used in USAR?

Results:

"USAR Search Spec here. In a collapse we are basically in 2 types of search. K9 and technical. We use live find dogs to search the areas to locate possible victims..... technology is nowhere near sophisticated enough to do what a dog does so robots are out. So lets move on to technical shall we.

In technical we have 2 categories. Looking and listening. For acoustic searches we use an array of sensors placed around the pile. We will call outbfor them to make some noise or bang on something then listen in. We can isolate and listen to each individual sensor. From there we will move the array and repeat the process. We can then start to triangulate based on intensity where we think a victim may be located. There are a couple companies that make these but the Del Sar has been the gold standard for a while.

So the other method is using cameras. Simplest terms is an articulating camera on an extendable boom that we can stick into void spaces to search. Usually DelSars and dogs will give us an idea of where to start. The newest versions of these cameras (Zistos search cam) has multiple heads we can put on with IR or 40x zoom and that helps a lot. There is also a newish camera that has a 360 camera on it (firstlook 360). That doesnt articulate manually. But we can stick it in a void and on the tablet look around.

As far as technologies we dont have thatbrobotics may solve..... remember in Prometheus. The Pups. They scanned the areas and were able to create a 3D rendering of the tunnels. Something like that would be amazing. It would help us look for void spaces or recreate the building based on what we see.

Another limitation we have with the cameras is the boom and the boom length. We can only go in about 12' and thats if we get straight back. There are fiber optic cameras but they dont have a way to control them at the head. So a tethered walking robot with legs and a camera would allow us to go much deeper without having to get the rescue guys out to start breaking concrete." (Blines, 2023).
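The acoustic search described above (an array of sensors placed around the pile, triangulating on sound intensity) can be sketched with a simple inverse-square model. This is an illustrative reconstruction under idealized free-field assumptions, not the actual method of the DelSar or any other product; the sensor layout, grid search, and function name are our own.

```python
def locate_by_intensity(sensor_positions, intensities, area=10.0, step=0.1):
    """Grid-search estimate of a sound source from received intensities.

    Assumes free-field inverse-square attenuation (I = P / d^2) over a
    square area; real debris piles attenuate and reflect sound, so this
    is only an idealized sketch of intensity-based triangulation.
    """
    n = int(round(area / step)) + 1
    best, best_err = None, float("inf")
    for i in range(n):
        for j in range(n):
            x, y = i * step, j * step
            # Squared distance from each sensor to this candidate position.
            d2 = [(sx - x) ** 2 + (sy - y) ** 2 + 1e-9
                  for sx, sy in sensor_positions]
            # Least-squares source power P for this candidate.
            p = (sum(m / d for m, d in zip(intensities, d2))
                 / sum(1.0 / d ** 2 for d in d2))
            # Residual between measured and predicted intensities.
            err = sum((m - p / d) ** 2 for m, d in zip(intensities, d2))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

With four sensors at the corners of a 10 m square and a source at (3, 7), the minimum-residual cell lands on the true position to within the grid step.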

Sensor Results

Range:

  • Thermal Imaging Camera: 15km (Thermal Zoom Cameras, n.d.)
  • Sound Sensor: 2.5m (Assembly, n.d.)
  • Gas Detector: 190m (Baker Engineering and Risk Consultants, n.d.)
  • LIDAR: 200m (Texas Instruments, n.d.)
  • GPS: 20000km (GPS.gov: Space Segment, n.d.)

Resolution:

  • Thermal Imaging Camera: This camera has a resolution of 640x480 (Camera Resolution and Range, n.d.)
  • Sound Sensor: The resolution of a sound sensor is 0.1dB (PCE Instruments UK: Test Instruments, 2023)
  • Gas Detector: The detector resolution is around 0.01 ppm (parts per million) (Analytika, n.d.)
  • LIDAR: This sensor has a resolution of 0.2° in vertical and 2° in horizontal (Lyu et al., 2019)
  • GPS: The GPS has a resolution of 86cm (Wang, 2017)

Accuracy:

  • Thermal Imaging Camera: The camera has a 2ºC or 2% margin of error as stated by Teledyne FLIR (2016)
  • Sound Sensor: The sound sensor has an accuracy of ± 3 dB (J., n.d.)
  • Gas Detector: The detector has an accuracy of around 50 ppm (parts per million) which is 0.005% (CO2 Meter, 2022)
  • LIDAR: This sensor has an accuracy of 10mm in range and for the mapping itself it has an accuracy of 1 cm for the x and y axis and 2 cm for the z axis (VectorNav, 2023)
  • GPS: The GPS has an accuracy of 4.9 meters (Van Diggelen & Enge, 2015)

Cost:

  • Thermal Imaging Camera: $200 (WellHealthWorks, 2022)
  • Gas Detector: $300 (Seitron, n.d.)
  • Sound Sensor: $35 (Thomann, n.d.)
  • LIDAR: $144 (Shop - Wiseome Mini LiDAR Supplier, n.d.)
  • GPS: $100 (Hensen, 2023)
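To make the figures above easier to compare, they can be folded into a single weighted score. This sketch is not part of the study's methodology: the weights are illustrative assumptions, accuracy is omitted because its units differ per sensor (°C, dB, ppm, metres), and the listed GPS "range" is the satellite altitude rather than a sensing range.

```python
import math

# Range and cost figures from the lists above.
SENSORS = {
    "thermal camera": {"range_m": 15_000,     "cost_usd": 200},
    "sound sensor":   {"range_m": 2.5,        "cost_usd": 35},
    "gas detector":   {"range_m": 190,        "cost_usd": 300},
    "lidar":          {"range_m": 200,        "cost_usd": 144},
    "gps":            {"range_m": 20_000_000, "cost_usd": 100},
}

def score(sensors, w_range=0.5, w_cost=0.5):
    """Weighted 0-1 score: longer range and lower cost are better.

    Range is compared on a log scale because the listed values span
    seven orders of magnitude.
    """
    logs = {k: math.log10(v["range_m"]) for k, v in sensors.items()}
    costs = {k: v["cost_usd"] for k, v in sensors.items()}
    r_lo, r_hi = min(logs.values()), max(logs.values())
    c_lo, c_hi = min(costs.values()), max(costs.values())
    return {
        k: w_range * (logs[k] - r_lo) / (r_hi - r_lo)
           + w_cost * (1 - (costs[k] - c_lo) / (c_hi - c_lo))
        for k in sensors
    }
```

Changing the weights shifts the ranking, which is the point: the "best" sensor depends on whether a mission is range-limited or budget-limited.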

Project documents

Test results

Simulation code

breed [stimuli stimulus]
breed [robots robot]
stimuli-own [lifespan]
globals [should-stop?]

to setup
  clear-all
  cluster-debris
  setup-robots
  setup-stimuli-patches
  set should-stop? false
  reset-ticks
end

to cluster-debris
  ask n-of cluster-count patches [
    set pcolor gray
    let frontier [self] of neighbors6 with [pcolor != gray]

    let cluster-size cluster-min-size + random (cluster-max-size - cluster-min-size + 1)
    let current-size 1

    loop [
      if current-size >= cluster-size or empty? frontier [stop]
      ask one-of frontier [
        set pcolor gray
        set frontier remove self frontier
        set frontier sentence frontier [self] of neighbors6 with [pcolor != gray]
      ]
      set current-size current-size + 1
    ]
  ]
end

to setup-robots
  create-robots 1 [
    set color yellow
    setxyz x-start y-start z-start
    set pitch 270
    set heading 0
    pen-down
  ]
end

to setup-stimuli-patches
  ; green patches mark survivors: the robot stops when it reaches one
  ask n-of survivors patches [
    set pcolor green
  ]
  ; red patches are sources of random stimuli that can distract the robot
  ask n-of random-stimuli patches [
    set pcolor red
  ]
end

to go
  if should-stop? [stop]
  spawn-turtle
  move-turtles
  tick
end

to spawn-turtle
  ; periodically spawn stimulus turtles from red and green source patches
  if ticks mod spawn-interval = 0 [
    let spawn-patches patches with [pcolor = red]
    if any? spawn-patches [
      let random-patch one-of spawn-patches
      create-stimuli 100 [
        set lifespan random 50 ; passable parameter for max range of lifespan
        set color yellow
        setxyz [pxcor] of random-patch [pycor] of random-patch [pzcor] of random-patch
        set pitch random 360
      ]
    ]
    ; spawn stimuli from survivor (green) patches as well
    let goal-patches patches with [pcolor = green]
    if any? goal-patches [
      let random-patch one-of goal-patches
      create-stimuli 10 [
        set lifespan random 50 ; passable parameter for max range of lifespan
        set color red
        setxyz [pxcor] of random-patch [pycor] of random-patch [pzcor] of random-patch
        set pitch random 360
      ]
    ]
  ]
end

to move-turtles
   ask robots [
    let target get-target-patch

    ;; stop if there are no possible moves
    if target = nobody [
      set should-stop? true
      user-message "The robot got stuck!"
      stop
    ]

    face target
    move-to target

    if [pcolor = green] of patch-here [
      set should-stop? true
      user-message (word "A survivor was found at " patch-here)
    ]
  ]

  ; move stimuli
  ask stimuli [
    let pre-move patch-here
    fd 2
    if distance pre-move != distance-nowrap pre-move [die] ;stimulus wrapped

    right random 20
    left random 20
    tilt-up random 20
    tilt-down random 20

    set lifespan lifespan - random 20 ; passable parameter for lifespan?
    death ;stimuli disappear after random amount of ticks
  ]
end

to death  ; stimulus procedure
  ; stimuli die once their lifespan dips below zero
  if lifespan < 0 [ die ]
end

;; Returns the patch towards which the robot should move, or nobody if there are no possible moves
to-report get-target-patch
  let view-radius 10
  let patches-in-view patches in-cone view-radius 179 with [distance-nowrap myself <= view-radius]
  let possible-targets patches in-cone 1.9 110 with [ ; patches towards which the robot can move
    pcolor != gray                 ; debris clusters
    and distance-nowrap myself > 0 ; current patch
    and distance-nowrap myself < 2 ; wrapping
  ]

  if any? patches-in-view with [pcolor = green] [
    ;; Try to move towards a green patch in view
    let target-in-view one-of patches-in-view with [pcolor = green]
    report one-of possible-targets with-min [distance target-in-view]
  ]

  ask patches-in-view with [pcolor = red] [set pcolor magenta]
  let stimuli-in-view stimuli in-cone view-radius 179 with [distance-nowrap myself <= view-radius]
  if any? stimuli-in-view [
    ;; Try to move towards the furthest stimulus in view
    let furthest-stimulus one-of stimuli-in-view with-max [distance myself]
    report one-of possible-targets with-min [distance furthest-stimulus]
  ]

  ;; TODO: informed choice in absence of stimuli
  report one-of possible-targets
end
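The target-selection priority implemented in get-target-patch above (move toward a visible survivor patch, otherwise toward the furthest visible stimulus, otherwise wander) can be sketched outside NetLogo. The following Python is a simplified 2D, 4-neighbour re-implementation for illustration only; the grid encoding and function name are our own and the NetLogo 3D cone-of-view and wrap handling are omitted.

```python
import math
import random

def choose_target(robot, grid, stimuli, view_radius=10):
    """Pick the neighbouring free cell the robot should move to.

    grid[y][x] is 'debris', 'survivor', or 'free'. Mirrors the
    simulation's priority order:
      1. step towards a survivor cell within view_radius,
      2. else towards the furthest visible stimulus,
      3. else a random free neighbour; None means the robot is stuck.
    """
    rx, ry = robot

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Neighbouring cells the robot can physically enter (no debris).
    moves = [(rx + dx, ry + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= ry + dy < len(grid) and 0 <= rx + dx < len(grid[0])
             and grid[ry + dy][rx + dx] != 'debris']
    if not moves:
        return None

    # 1. Survivor cells in view: step towards one of them.
    survivors = [(x, y) for y, row in enumerate(grid) for x, c in enumerate(row)
                 if c == 'survivor' and dist((x, y), robot) <= view_radius]
    if survivors:
        goal = survivors[0]
        return min(moves, key=lambda m: dist(m, goal))

    # 2. Otherwise chase the furthest stimulus in view.
    visible = [s for s in stimuli if dist(s, robot) <= view_radius]
    if visible:
        goal = max(visible, key=lambda s: dist(s, robot))
        return min(moves, key=lambda m: dist(m, goal))

    # 3. Uninformed wander (the NetLogo code marks this as a TODO).
    return random.choice(moves)
```

The same three-tier structure is what makes the robot in the simulation commit to a survivor once it is in view while still being steerable by stimuli at a distance.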
@#$#@#$#@
GRAPHICS-WINDOW
0
0
620
621
-1
-1
12.0
1
10
1
1
1
0
1
1
1
-25
25
-25
25
-25
25
1
1
1
ticks
30.0

BUTTON
10
11
87
44
NIL
setup
NIL
1
T
OBSERVER
NIL
NIL
NIL
NIL
1

TEXTBOX
114
17
285
35
Debris cluster settings
16
0.0
1

SLIDER
114
51
286
84
cluster-count
cluster-count
0
500
250.0
1
1
NIL
HORIZONTAL

SLIDER
114
102
286
135
cluster-min-size
cluster-min-size
1
cluster-max-size
25.0
1
1
NIL
HORIZONTAL

SLIDER
114
151
286
184
cluster-max-size
cluster-max-size
cluster-min-size
100
50.0
1
1
NIL
HORIZONTAL

SLIDER
115
199
287
232
spawn-interval
spawn-interval
0
25
1.0
1
1
NIL
HORIZONTAL

SLIDER
116
241
289
274
survivors
survivors
0
10
1.0
1
1
NIL
HORIZONTAL

SLIDER
114
287
287
320
random-stimuli
random-stimuli
0
100
50.0
1
1
NIL
HORIZONTAL

BUTTON
12
60
88
94
NIL
go
T
1
T
OBSERVER
NIL
NIL
NIL
NIL
1

TEXTBOX
363
21
513
41
Robot start settings
16
0.0
1

INPUTBOX
336
56
525
116
x-start
0.0
1
0
Number

INPUTBOX
334
137
524
197
y-start
0.0
1
0
Number

INPUTBOX
333
208
523
268
z-start
25.0
1
0
Number

BUTTON
568
114
711
147
Follow Vine Robot
follow robot 0
NIL
1
T
OBSERVER
NIL
NIL
NIL
NIL
1

TEXTBOX
569
18
719
41
Perspective
16
0.0
1

BUTTON
568
58
711
91
Watch Vine Robot
watch robot 0
NIL
1
T
OBSERVER
NIL
NIL
NIL
NIL
1

BUTTON
569
163
712
196
Ride Vine Robot
ride robot 0
NIL
1
T
OBSERVER
NIL
NIL
NIL
NIL
1

BUTTON
570
214
716
247
Reset Perspective
reset-perspective
NIL
1
T
OBSERVER
NIL
NIL
NIL
NIL
1

@#$#@#$#@
@#$#@#$#@
Circle -7500403 true true 192 85 38
Circle -7500403 true true 85 40 38
Circle -7500403 true true 177 40 38
Circle -7500403 true true 177 132 38
Circle -7500403 true true 70 85 38
Circle -7500403 true true 130 25 38
Circle -7500403 true true 96 51 108
Circle -16777216 true false 113 68 74
Polygon -10899396 true false 189 233 219 188 249 173 279 188 234 218
Polygon -10899396 true false 180 255 150 210 105 210 75 240 135 240

house
false
0
Rectangle -7500403 true true 45 120 255 285
Rectangle -16777216 true false 120 210 180 285
Polygon -7500403 true true 15 120 150 15 285 120
Line -16777216 false 30 120 270 120

leaf
false
0
Polygon -7500403 true true 150 210 135 195 120 210 60 210 30 195 60 180 60 165 15 135 30 120 15 105 40 104 45 90 60 90 90 105 105 120 120 120 105 60 120 60 135 30 150 15 165 30 180 60 195 60 180 120 195 120 210 105 240 90 255 90 263 104 285 105 270 120 285 135 240 165 240 180 270 195 240 210 180 210 165 195
Polygon -7500403 true true 135 195 135 240 120 255 105 255 105 285 135 285 165 240 165 195

line
true
0
Line -7500403 true 150 0 150 300

line half
true
0
Line -7500403 true 150 0 150 150

link
true
0
Line -7500403 true 150 0 150 300

link direction
true
0
Line -7500403 true 150 150 30 225
Line -7500403 true 150 150 270 225

pentagon
false
0
Polygon -7500403 true true 150 15 15 120 60 285 240 285 285 120

person
false
0
Circle -7500403 true true 110 5 80
Polygon -7500403 true true 105 90 120 195 90 285 105 300 135 300 150 225 165 300 195 300 210 285 180 195 195 90
Rectangle -7500403 true true 127 79 172 94
Polygon -7500403 true true 195 90 240 150 225 180 165 105
Polygon -7500403 true true 105 90 60 150 75 180 135 105

plant
false
0
Rectangle -7500403 true true 135 90 165 300
Polygon -7500403 true true 135 255 90 210 45 195 75 255 135 285
Polygon -7500403 true true 165 255 210 210 255 195 225 255 165 285
Polygon -7500403 true true 135 180 90 135 45 120 75 180 135 210
Polygon -7500403 true true 165 180 165 210 225 180 255 120 210 135
Polygon -7500403 true true 135 105 90 60 45 45 75 105 135 135
Polygon -7500403 true true 165 105 165 135 225 105 255 45 210 60
Polygon -7500403 true true 135 90 120 45 150 15 180 45 165 90

square
false
0
Rectangle -7500403 true true 30 30 270 270

square 2
false
0
Rectangle -7500403 true true 30 30 270 270
Rectangle -16777216 true false 60 60 240 240

star
false
0
Polygon -7500403 true true 151 1 185 108 298 108 207 175 242 282 151 216 59 282 94 175 3 108 116 108

target
false
0
Circle -7500403 true true 0 0 300
Circle -16777216 true false 30 30 240
Circle -7500403 true true 60 60 180
Circle -16777216 true false 90 90 120
Circle -7500403 true true 120 120 60

tree
false
0
Circle -7500403 true true 118 3 94
Rectangle -6459832 true false 120 195 180 300
Circle -7500403 true true 65 21 108
Circle -7500403 true true 116 41 127
Circle -7500403 true true 45 90 120
Circle -7500403 true true 104 74 152

triangle
false
0
Polygon -7500403 true true 150 30 15 255 285 255

triangle 2
false
0
Polygon -7500403 true true 150 30 15 255 285 255
Polygon -16777216 true false 151 99 225 223 75 224

truck
false
0
Rectangle -7500403 true true 4 45 195 187
Polygon -7500403 true true 296 193 296 150 259 134 244 104 208 104 207 194
Rectangle -1 true false 195 60 195 105
Polygon -16777216 true false 238 112 252 141 219 141 218 112
Circle -16777216 true false 234 174 42
Rectangle -7500403 true true 181 185 214 194
Circle -16777216 true false 144 174 42
Circle -16777216 true false 24 174 42
Circle -7500403 false true 24 174 42
Circle -7500403 false true 144 174 42
Circle -7500403 false true 234 174 42

turtle
true
0
Polygon -10899396 true false 215 204 240 233 246 254 228 266 215 252 193 210
Polygon -10899396 true false 195 90 225 75 245 75 260 89 269 108 261 124 240 105 225 105 210 105
Polygon -10899396 true false 105 90 75 75 55 75 40 89 31 108 39 124 60 105 75 105 90 105
Polygon -10899396 true false 132 85 134 64 107 51 108 17 150 2 192 18 192 52 169 65 172 87
Polygon -10899396 true false 85 204 60 233 54 254 72 266 85 252 107 210
Polygon -7500403 true true 119 75 179 75 209 101 224 135 220 225 175 261 128 261 81 224 74 135 88 99

wheel
false
0
Circle -7500403 true true 3 3 294
Circle -16777216 true false 30 30 240
Line -7500403 true 150 285 150 15
Line -7500403 true 15 150 285 150
Circle -7500403 true true 120 120 60
Line -7500403 true 216 40 79 269
Line -7500403 true 40 84 269 221
Line -7500403 true 40 216 269 79
Line -7500403 true 84 40 221 269

x
false
0
Polygon -7500403 true true 270 75 225 30 30 225 75 270
Polygon -7500403 true true 30 75 75 30 270 225 225 270
@#$#@#$#@
NetLogo 3D 6.3.0
@#$#@#$#@
@#$#@#$#@
@#$#@#$#@
@#$#@#$#@
@#$#@#$#@
default
0.0
-0.2 0 0.0 1.0
0.0 1 1.0 0.0
0.2 0 0.0 1.0
link direction
true
0
Line -7500403 true 150 150 90 180
Line -7500403 true 150 150 210 180
@#$#@#$#@
0
@#$#@#$#@