PRE2024 1 Group2

From Control Systems Technology Group
Jump to navigation Jump to search

Group Members

Name Student ID Department
Max van Aken 1859455 Applied Physics
Robert Arnhold 1847848 Mechanical Engineering
Tim Damen 1874810 Applied Physics
Ruben Otter 1810243 Computer Science
Raul Sanchez Flores 1844512 Computer Science / Applied Mathematics

Problem Statement

In modern warfare, drones play a huge role. They are relatively cheap to produce and can cause considerable harm, while relatively little is done to counter them. Large anti-drone systems exist that protect important areas against drone attacks, but individuals at the front line are not covered by such systems, as they are expensive and too large to carry around. This leaves front-line individuals vulnerable to drone attacks. We aim to show that an anti-drone system can be made cheap, lightweight, and portable in order to protect these vulnerable individuals.

Objectives

To show that an anti-drone system can be made cheap, lightweight, and portable, we will:

  • Explore and determine ethical implications of the portable device.
  • Determine how drones and projectiles can be detected.
  • Determine how a drone or projectile can be intercepted and/or redirected.
  • Build a prototype of this portable device.
  • Create a model for the interception.
  • Prove the system’s utility.

Planning

We will be working on this project over the upcoming 8 weeks. The table below shows when we aim to finish the different tasks within those 8 weeks.

Week Task
1 Initial planning and setting up the project.
2 Literature research.
3 Create ethical framework.
Conduct an interview with an expert to confirm and construct the use cases.
Start constructing prototype and software.
Determine potential problems.
4 Continue constructing prototype and software
5 Finish prototype and software
6 Testing prototype to verify its effectiveness and use cases.
Evaluate testing results and make final changes.
7 Create final presentation.
8 Finish Wiki page.

Risk Evaluation

A risk evaluation matrix can be used to determine where the risks lie within our project. It is based on two factors: the consequence if a task is not fulfilled and the likelihood of that happening. Both factors are rated on a scale from 1 to 5, and the matrix below maps them to a final risk level: low, medium, high, or critical. Knowing the risks beforehand makes it possible to prevent failures, as it is known where special attention is required.
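The mapping from the two scores to a risk level can be sketched as a small lookup. The score bands below are chosen so that the function reproduces the entries of our risk table; they are an assumption, and the real matrix may band the scores differently.

```python
# Sketch of the risk-matrix lookup. The product-score bands below are fitted
# to the entries of the table in this section and are an assumption, not the
# exact bands of the referenced matrix image.
def risk_level(consequence: int, likelihood: int) -> str:
    """Map two 1-5 ratings to a qualitative risk level."""
    score = consequence * likelihood
    if score <= 2:
        return "Low"
    elif score <= 9:
        return "Medium"
    elif score <= 16:
        return "High"
    return "Critical"
```

For example, "Prove the system's utility" with consequence 5 and likelihood 2 gives `risk_level(5, 2) == "High"`, matching the table below.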


Risk evaluation matrix
Task Consequence (1-5) Likelihood (1-5) Risk
Collecting 25 articles for the SoTA 1 1 Low
Interviewing front line soldier 1 2 Low
Finding features for our system 4 1 Medium
Making a prototype 3 3 Medium
Make the wiki 5 1 Medium
Finding a detection method for drones and projectiles 4 1 Medium
Determine (ethical) method to intercept or redirect drones and projectiles 5 1 Medium
Prove the system's utility 5 2 High

Interviews

TO BE ADDED

Users and their Requirements

We currently have two main usages for this project in mind, which are the following:

  • Military forces facing threats from drones and projectiles.
  • Privately-managed critical infrastructure in areas at risk of drone-based attacks.

The users of the system will require the following:

  • Minimal maintenance
  • High reliability
  • System should not pose an additional threat to its surroundings
  • System must be personally portable
  • System should work reliably in dynamic, often extreme, environments
  • System should be scalable and interoperable in concept

Ethical Framework

Description

The goal of this project is to create an easily portable system that can be used as a last line of defense against incoming projectiles. To arrive at a sufficient ethical framework, this description needs to be made more specific and categorized. The device under consideration is capable of neutralizing an incoming projectile. However, an incoming projectile does not always have to be neutralized, and this can differ between situations. Because the main purpose of this device is use in combat circumstances, we will focus on that sort of scenario, which is described as follows:

  1. This device could be used in war zone situations. For example, if soldiers are in trenches it is hard for the enemy to hit them, so a solution nowadays used in Ukraine is drones [???]. A war zone is in general a rapidly evolving environment, so soldiers and equipment need to be able to adapt to that [1]. To give certainty that the device will neutralize a harmful projectile, there needs to be an extensive software framework that can, for example, distinguish birds from drones, but can also detect grenades dropped from higher altitude.

In this scenario the device will be actively used in a war zone and should therefore comply with the Geneva Conventions. It must thus be able to examine the impact its actions would have on civilians and be certain it will not harm civilians directly or indirectly. To illustrate: if a kamikaze drone is heading towards a military vehicle, the device will either redirect the drone or make it explode above the ground, depending on the type of interception chosen. If the drone is redirected so that it does not hit too close to the military vehicle, the kamikaze drone might injure civilians. If the drone explodes above the ground, the fragments will still shoot downwards, causing injuries over a larger area, possibly including civilians [???]. This not only violates the Geneva Conventions, but also misses the point of preventing harm.

In principle this can be seen as a trolley problem in disguise. On the one hand there are the soldiers who get killed if the system is not activated; on the other hand there are the civilians who get killed if it is. To prevent this, the device must be able to choose a desired and achievable location to redirect an incoming drone towards. Seen from a utilitarian point of view, we want to minimize harm in order to maximize happiness. Triggering the drone's explosion in the air, combined with proper warnings for the surrounding people, might be a way to achieve this maximization and is looked into further in chapter????.

After all, we have to remember that a situation in which there is no harmless place to redirect the drone to is fairly specific and by no means an everyday problem. Moreover, this specific problem can be seen as a side effect that is expected to be far outweighed by the gains and benefits of the product.


[1] https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-101/jfq-101_78-83_Lynch.pdf?ver=Gu3iNHVHh5wYTbAPOqwd7Q%3d%3d

Is it ethical to deflect or redirect drones and projectiles?[2][3][4][5][6][7][8]

The goal of this project is to deflect or redirect drones and projectiles so that soldiers are safe. But by deflecting or redirecting these drones and projectiles, they do not disappear: chances are that other people get injured by them. So is it even ethical to deflect or redirect these drones and projectiles?


Let's start by noting that this piece of technology is in no way meant to harm anyone, but instead to keep people safe. However, according to International Humanitarian Law (IHL), Article 49 specifically, an attack is any act of violence, whether in offence or in defense. This means that the deflection or redirection of drones and projectiles is considered an attack. Since this technology is used in war zones, where attacks are common, this in itself need not be a problem: deflecting or redirecting an attack is simply a continuation of an already ongoing attack. That would lead to the conclusion that it is ethical to deflect or redirect drones and projectiles. However, deflection or redirection can also harm other people on the defending side or even civilians; there is no guarantee that the attack will be deflected or redirected towards the attacking side. In this respect the technology is comparable to autonomous cars: both are made with the intention of protecting people, but both have the side effect that they can bring harm to people and can make life-or-death decisions. A great deal of research has gone into the ethics of autonomous cars, and this information can be used to study the ethics of our product. This will be done by looking at different scenarios that could be encountered in war.


In a one-on-one situation in war, with one person on the attacking side and one on the defending side, deflecting or redirecting drones and projectiles is ethical as argued above: an already existing attack is continued. Since it is a one-on-one situation, there is no one else who could possibly get harmed.


If we expand the situation to a battlefield with more than one person on the attacking side and more than one person on the defending side, there is a chance that the deflected or redirected drone or projectile hits a person from your own team, or someone from the other team who did not send the attack. The second case is not a big problem: in a war you may attack those who are fighting you, and even if this person did not send the drone or projectile towards you, he is on the attacking side and has the intention to fight you. Deflecting or redirecting a drone or projectile towards a member of your own team, however, means attacking someone who is not attacking you, which is not allowed. This situation can be compared to an autonomous car that, in the case of an accident, tries to minimize the risks for the driver by increasing the risks for the passengers, while in a perfect scenario the car should minimize risks for everyone involved in the accident and should not be biased towards one side or one person.

Although not all risks with autonomous cars can be resolved, we still use them, since they have the potential to reduce the number of accidents, especially as the technology advances. For the same reason, the deflection or redirection of drones and projectiles may be acceptable in this scenario, provided we aim to further develop the technology to decrease potential harm and remove bias from the system.


A last expansion of the situation is one in which civilians are nearby who can be harmed if a drone or projectile is deflected or redirected. For autonomous cars this is comparable to a situation in which a car, in order to prevent a collision, hits a pedestrian who posed no threat to the person sitting in the car. The ethics literature on autonomous cars shows that applying the technology in such situations is not straightforward: it depends on how the technology is designed, used, and regulated, which requires a multidisciplinary approach.

Specifications

ID Requirement Preference Constraint Category Priority Testing Method
R1 Detect Rogue Drone Detection range of at least 30m No false negatives Hardware & Software M Simulate rogue drone scenarios in the field
R2 Object Detection 100% recognition accuracy Detects even small, fast-moving objects Software M Test with various object sizes and speeds in the lab
R3 Detect Drone Direction Accuracy of 90% Must account for evasive drone movements Hardware & Software M Use drones flying in random directions for validation
R4 Detect Drone Speed Accuracy within 5% of actual speed Must be effective up to 20m/s Hardware & Software M Measure speed detection in controlled drone flights
R5 Detect Projectile Speed Accurate speed detection for fast projectiles Must handle speeds above 10m/s Hardware & Software M Fire projectiles at varying speeds and record accuracy
R6 Detect Projectile Direction Accuracy within 5 degrees No significant deviation in direction detection Hardware & Software M Test with fast-moving objects in random directions
R7 Track Drone with Laser Tracks moving targets within a 1m² area Must follow targets precisely within the boundary Hardware S Use a laser pointer to follow a flying drone in real-time
R8 Can Intercept Drone/Projectile Drone/Projectile is within the 1m² square Must not damage surroundings or pose threat Hardware C Test in a field, using projectiles and drones in motion
R9 Low Cost-to-Intercept Interception cost under $50 per event Hardware & Software S Compare operational cost per interception in trials
R10 Low Total Cost Less than $2000 Should include all components (detection + net) Hardware C Budget system components and assess affordability
R11 Portability System weighs less than 3kg Hardware C Test for total weight and ease of transport
R12 Easily Deployable Setup takes less than 5 minutes Must require no special tools or training Hardware C Timed assembly by users in various environments
R13 Quick Reload/Auto Reload Reload takes less than 30 seconds Must be easy to reload in the field Hardware C Measure time to reload net launcher in real-time scenarios
R14 Environmental Durability Operates in temperatures between -20°C and 50°C Must work reliably in rain, dust, and strong winds Hardware W Test in extreme weather conditions (wind, rain simulation)

Detection

Drone Detection

The Need for Effective Drone Detection

With the rapid advancement and production of unmanned aerial vehicles (UAVs), particularly small drones, new security challenges have emerged for the military sector.[9] Drones can be used for surveillance, smuggling, and launching explosive projectiles, posing threats to infrastructure and military operations.[9] Within our project we will mostly be looking at the threat of drones launching explosive projectiles. Our objective is to develop a portable, last-line-of-defense system that can detect drones and intercept and/or redirect the projectiles they launch. An important aspect of such a system is the capability to reliably detect drones in real time, possibly in dynamic environments.[10] The challenge is to create a solution that is not only effective but also lightweight, portable, and easy to deploy.

Approaches to Drone Detection

Numerous approaches have been explored in the field of drone detection, each with its own set of advantages and disadvantages.[11][10] The main methods include radar-based detection, radio frequency (RF) detection, acoustic-based detection, and vision-based detection.[9][11] It is essential for our project to analyze these methods within the context of portability and reliability, to identify the most suitable method, or combination of methods.

Radar-Based Detection

Radar-based systems are considered one of the most reliable methods for detecting drones.[11] Radar systems transmit short electromagnetic waves that bounce off objects in the environment and return to the receiver, allowing the system to determine attributes of an object such as its range, velocity, and size.[11][10] Radar is especially effective in detecting drones in all weather conditions and can operate over long ranges.[9][11] Radars such as active pulse-Doppler radar can track the movement of drones and distinguish them from other flying objects based on the Doppler shift caused by the motion of the propellers (the micro-Doppler effect).[9][10][11]

Despite its effectiveness, radar-based detection systems come with certain limitations that must be considered. First, traditional radar systems are rather large and require significant power, making them less suitable for a portable defense system.[11] Additionally, radar can struggle to detect small drones flying at low altitudes due to their limited radar cross-section (RCS), particularly in cluttered environments like urban areas.[11] Millimeter-wave radar technology, which operates at high frequencies, offers a potential solution by providing better resolution for detecting small objects, but it is also more expensive and complex.[11][9]
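To illustrate the Doppler relation mentioned above: for a monostatic radar the radial velocity of a target follows directly from the measured frequency shift, v = f_d · λ / 2. The 24 GHz carrier in the example is an illustrative choice, not a specification of our system.

```python
# Radial velocity from a measured Doppler shift (monostatic radar).
# v = f_d * wavelength / 2; the factor 2 accounts for the two-way path.
C = 3.0e8  # speed of light in m/s

def doppler_velocity(doppler_shift_hz: float, radar_freq_hz: float) -> float:
    """Radial target velocity implied by a Doppler shift at a given carrier."""
    wavelength = C / radar_freq_hz
    return doppler_shift_hz * wavelength / 2.0

# A 1.6 kHz shift on a 24 GHz radar corresponds to a 10 m/s target.
v = doppler_velocity(1600.0, 24.0e9)
```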

Radio Frequency (RF)-Based Detection

Another common method is detecting drones through radio frequency (RF) analysis.[9][10][11][12] Most drones communicate with their operators via RF signals, typically in the 2.4 GHz and 5.8 GHz bands.[9][11] RF-based detection systems monitor the electromagnetic spectrum for these signals, allowing them to identify the presence of a drone and its controller on these bands.[11] One advantage of RF detection is that it does not require line of sight: the detection system does not need a view of the drone.[11] It can also operate over long distances, making it effective in a wide range of scenarios.[11]

However, RF-based detection systems do have their limitations. They cannot detect drones that do not rely on communication with an operator, such as autonomous drones.[10] The systems are also less reliable in environments where many RF signals are present, such as cities.[11] In situations where high precision and reliability are a must, RF-based detection may therefore be unsuitable on its own.
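A minimal sketch of the spectrum-monitoring idea: flag any detected carrier that falls inside the 2.4 GHz or 5.8 GHz ISM bands mentioned above. Treating every peak in these bands as drone-related is of course a simplification; a real system would also fingerprint the modulation.

```python
# Common drone control bands (ISM band edges in Hz). Flagging any signal in
# these bands as potentially drone-related is a simplifying assumption.
ISM_BANDS_HZ = [(2.400e9, 2.4835e9), (5.725e9, 5.875e9)]

def in_drone_band(freq_hz: float) -> bool:
    """True if a detected carrier frequency falls in a typical drone band."""
    return any(lo <= freq_hz <= hi for lo, hi in ISM_BANDS_HZ)
```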

Acoustic-Based Detection

Acoustic detection systems rely on the unique sound signature produced by drones, particularly the noise generated by their propellers and motors.[11] These systems use highly sensitive microphones to capture these sounds and then analyze the audio signals to identify the presence of a drone.[11] The advantage of this type of detection is that it is relatively low-cost and does not require line of sight, so it is mostly used for detecting drones behind obstacles in non-open spaces.[11][9]

However, it also has its disadvantages. In environments with a lot of noise, such as battlefields, these systems are less effective.[10][11] Additionally, some drones are designed to operate silently.[10] Acoustic systems also only work at rather short range, since sound weakens over distance.[11]
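The short-range limitation follows from spherical spreading: sound pressure level drops about 6 dB for every doubling of distance. A rough sketch, assuming a free field and an illustrative source level of 80 dB at 1 m (not a measured drone value):

```python
import math

# Free-field spherical spreading: SPL falls by 20*log10(r/r0) dB with
# distance. The 80 dB at 1 m source level below is an illustrative assumption.
def spl_at_distance(spl_ref_db: float, ref_dist_m: float, dist_m: float) -> float:
    """Sound pressure level at dist_m, given a reference level at ref_dist_m."""
    return spl_ref_db - 20.0 * math.log10(dist_m / ref_dist_m)

# At 100 m the same source is down to ~40 dB, close to quiet ambient noise,
# which illustrates why acoustic detection only works at short range.
level = spl_at_distance(80.0, 1.0, 100.0)
```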

Vision-Based Detection

Vision-based detection systems use cameras, either in the visible or the infrared spectrum, to detect drones visually.[9][11] These systems rely on image recognition algorithms, often based on machine learning.[11][12] Drones are then detected based on their shape, size, and movement.[12] The main advantage of this type of detection is that the operators themselves can confirm the presence of a drone and differentiate between a drone and other objects such as birds.[11]

However, vision-based detection systems also have disadvantages.[10][11] They are highly dependent on environmental conditions: they need a clear line of sight and good lighting, and weather conditions can affect their accuracy.[10][11]
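At its simplest, the motion cue such algorithms rely on can be sketched with frame differencing: compare two grayscale frames and keep the pixels that changed. This is only a stand-in for the machine-learning pipeline described above; the frames and threshold are toy values.

```python
# Toy frame-differencing sketch: report pixels whose intensity changed by
# more than a threshold between two grayscale frames (a stand-in for the
# real ML-based recognition pipeline).
def moving_pixels(prev, curr, thresh=25):
    """Coordinates (row, col) of pixels that changed by more than thresh."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, val in enumerate(row)
            if abs(val - prev[r][c]) > thresh]

# A small bright "object" appears in the second 4x4 frame.
prev = [[0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[1][2] = 200
changed = moving_pixels(prev, curr)  # [(1, 2)]
```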

Best Approach for Portable Drone Detection

For our project, which focuses on a portable system, the ideal drone detection method must balance effectiveness, portability, and ease of deployment. Based on this, a sensor fusion approach appears to be the most appropriate.[11]

Sensor Fusion Approach

Given the limitations of each individual detection method, a sensor fusion approach, which would combine radar, RF, acoustic and vision-based sensors, offers the best chance of achieving reliable and accurate drone detection in a portable system.[11] Sensor fusion allows the strengths of each detection method to complement the weaknesses of the others, providing more effective detection in dynamic environments.[11]

  1. Radar Component: A compact, millimeter-wave radar system would provide reliable detection in different weather conditions and across long ranges.[10] While radar systems are traditionally bulky, recent advancements make it possible to develop portable radar units that can be used in a mobile system.[11] These would most likely be less effective on their own, which the sensor fusion approach compensates for.[11]
  2. RF Component: Integrating an RF sensor will allow the system to detect drones communicating with remote operators.[11] This component is lightweight and relatively low-power, making it ideal for a portable system.[11]
  3. Acoustic Component: Adding acoustic sensors can help detect drones flying at low altitudes or behind obstacles, where radar may struggle.[9][11] This component is essentially just a microphone, with the rest handled in software, and is therefore also ideal for a portable system.[11]
  4. Vision-Based Component: A camera system equipped with machine learning algorithms for image recognition can provide visual confirmation of detected drones.[12][11] This component can be added using a lightweight, wide-angle camera, which again does not keep the device from being portable.[11]

Conclusion

To achieve portability, we have to restrict certain sensors and/or components. To still achieve effective drone detection, the best approach is therefore sensor fusion: the system would integrate radar, RF, acoustic, and vision-based detection, which together compensate for each other's limitations, resulting in an effective, reliable, and portable system.

Sensor Fusion

When it comes to detection, sensor fusion is essential for integrating inputs from multiple sensor types, in our case radar, radio frequency, acoustic, and vision-based, to achieve higher accuracy and reliability in dynamic conditions. Sensor fusion can occur at different stages, namely early and late fusion.[13]

Early fusion integrates raw data from the various sensors at the initial stage, creating a unified dataset for processing. This approach captures the relations between different data sources, but requires extensive computational resources, especially when the data from the different sensors are not of the same type, as with acoustic and visual data.[13]

Late fusion integrates the processed outputs/decisions of each sensor. This method allows each sensor to apply its own processing approach before the data is fused, making it more suitable for systems where sensor outputs vary in type. According to recent studies in UAV tracking, late fusion improves robustness by allowing each sensor to operate independently under its optimal conditions.[13][14]

Therefore, we conclude that late fusion is best suited for our system.
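A minimal sketch of the decision-level (late) fusion just described: each sensor delivers its own detection confidence after its own processing, and only the fused score triggers an alert. The sensor names, weights, and threshold are illustrative assumptions, not tuned values.

```python
# Late fusion sketch: each sensor reports an independent confidence in
# [0, 1]; the fused decision is a weighted average. Weights and threshold
# are illustrative assumptions.
WEIGHTS = {"radar": 0.35, "rf": 0.2, "acoustic": 0.15, "vision": 0.3}

def fuse(confidences: dict, threshold: float = 0.5) -> bool:
    """True if the weighted average confidence crosses the alert threshold."""
    score = sum(WEIGHTS[name] * conf for name, conf in confidences.items())
    return score >= threshold

# Radar and vision agree on a target; the fused score raises an alert even
# though the RF sensor (e.g. an autonomous drone) reports almost nothing.
alert = fuse({"radar": 0.9, "rf": 0.1, "acoustic": 0.4, "vision": 0.8})
```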

Algorithms for Sensor Fusion in Drone Detection

  1. Extended Kalman Filter (EKF): EKF is widely used in sensor fusion for its ability to handle nonlinear data, making it suitable for tracking drones in real-time by predicting trajectory despite noisy inputs. EKF has proven effective for fusing data from radar and LiDAR, which is essential when estimating an object's location in complex settings like urban environments.[14]
  2. Convolutional Neural Networks (CNNs): Primarily used in vision-based detection, CNNs process visual data to recognize drones based on shape and movement. CNNs are particularly useful in late-fusion systems, where they can add a visual confirmation layer to radar or RF detections, enhancing overall reliability.[13][14]
  3. Bayesian Networks: These networks manage uncertainty by probabilistically combining sensor inputs. They are highly adaptable in scenarios with varied sensor reliability, such as combining RF and acoustic data, making them suitable for applications where conditions can impact certain sensors more than others.[13]
  4. Decision-Level Fusion with Voting Mechanisms: This algorithmic approach aggregates sensor outputs based on their agreement or “votes” regarding an object's presence. This simple yet robust method enhances detection accuracy by prioritizing consistent detections across sensors.[13]
  5. Deep Reinforcement Learning (DRL): DRL optimizes sensor fusion adaptively by learning from patterns in sensor data, making it particularly suited for applications requiring dynamic adjustments, like drone tracking in unpredictable environments. DRL has shown promise in managing fusion systems by balancing multiple inputs effectively in real-time applications.[13][14]
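As an illustration of item 1, a one-dimensional constant-velocity Kalman filter (the linear special case of the EKF) shows the predict/update cycle. The noise parameters are illustrative assumptions; a real tracker would run the full nonlinear EKF on a 3-D state.

```python
# 1-D constant-velocity Kalman filter step, the linear special case of the
# EKF described above. q (process noise) and r (measurement noise) are
# illustrative assumptions.
def kalman_step(x, v, p, z, dt=0.1, q=1e-3, r=0.5):
    """One predict/update cycle on position x (velocity v, variance p)."""
    # Predict: propagate the state and inflate the uncertainty.
    x_pred = x + v * dt
    p_pred = p + q
    # Update: blend prediction and measurement z via the Kalman gain.
    k = p_pred / (p_pred + r)
    return x_pred + k * (z - x_pred), v, (1.0 - k) * p_pred

# Track a target moving at 10 m/s; the estimate follows the measurements
# while the position variance shrinks below its initial value.
x, v, p = 0.0, 10.0, 1.0
for z in [1.0, 2.0, 3.0, 4.0, 5.0]:
    x, v, p = kalman_step(x, v, p, z)
```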

These algorithms have demonstrated efficacy across diverse sensor configurations in UAV applications. EKF and Bayesian networks are particularly valuable when fusing data from radar, RF, and acoustic sources, given their ability to manage noisy and uncertain data, while CNNs and voting mechanisms add reliability in vision-based and multi-sensor contexts. However, without testing, no conclusions can be drawn on which algorithms can be applied effectively and which would work best.[13][14]

Interception

Protection against Projectiles Dropped by Drones

In modern warfare and security scenarios, small drones have emerged as accessible and effective tools capable of carrying payloads that can be dropped over critical areas or assets. Rather than intercepting the drones themselves—which would require continuous surveillance and potentially high-cost engagement—the focus shifts to intercepting projectiles they may release. By targeting these projectiles as a last line of defense, security systems can provide more cost-effective and situationally responsive solutions to neutralize potential threats only when necessary. This approach minimizes the resources spent on countering non-threatening drones while concentrating defensive efforts on imminent, high-risk projectiles.

Drone interception, in this context, refers to various methods used to incapacitate or destroy projectiles released from rogue drones once they have been detected and identified as threats. The rise of small drones, including both off-the-shelf and custom-built UAVs, has led to the development of many counter-projectile systems that aim to neutralize these payloads in either non-destructive or destructive manners. The choice of method depends on factors like the environment, threat level, and available technology.

Key Approaches to Interception

Kinetic Interceptors:

Kinetic methods involve the direct impact destruction or incapacitation of projectiles dropped by drones. These systems are designed for medium- to long-range engagement and include missile-based and projectile-based interception systems. For example, the Raytheon Coyote Block 2+ missile is a kinetic interceptor designed to counter small aerial threats, including drone projectiles. The Coyote’s loitering munition design allows it to engage moving targets with precision and agility. Originally developed as a UAV, the Coyote has been adapted for use as a missile system, with each unit costing approximately $24,000 per shot—a substantial investment for intercepting relatively inexpensive threats like drone-deployed projectiles[1]. The precision and effectiveness of kinetic systems like the Coyote make them particularly valuable for high-priority targets, despite the high operational cost.

Electronic Warfare (Jamming and Spoofing)

Electronic warfare techniques, such as radio frequency (RF) and GNSS jamming, can disrupt control signals for drones, potentially causing them to lose connectivity and drop their payloads prematurely, where other defenses can intercept them. Spoofing, on the other hand, involves hijacking the communication system of the drone to manipulate the projectile’s release. While jamming is non-lethal, it may affect other electronics nearby and is ineffective against autonomous drones that don’t rely on external signals. For example, DroneShield’s DroneGun MKIII is a portable jammer capable of disrupting RF and GNSS signals up to 500 meters away. By targeting a drone’s control signals, the DroneGun can cause the drone to lose connection, descend, or prematurely release its payload, which can then be intercepted by other defenses. However, RF jamming can interfere with nearby electronics, making it most suitable for use in remote or controlled environments to minimize collateral signal disruption. This system has demonstrated effectiveness in remote military applications and large open spaces where the risk of collateral interference is minimized[2][3].

Directed Energy Weapons (Lasers and Electromagnetic Pulses)

Directed energy systems like lasers and electromagnetic pulses (EMP) are designed to disable dropped projectiles by damaging electrical components or destroying them outright. Lasers provide precision and instant engagement with minimal collateral, although they are costly and susceptible to environmental conditions like rain or fog. EMP systems can disable multiple projectiles simultaneously but may interfere with other electronics in the vicinity. For example, Israel’s Iron Beam is a high-energy laser system developed by Rafael Advanced Defense Systems for intercepting aerial threats, including projectiles dropped by drones. Unlike kinetic interceptors, Iron Beam offers a lower-cost engagement per interception. This system uses concentrated laser energy to disable or destroy incoming threats with a high degree of precision, but its effectiveness can be impacted by environmental factors like fog or rain[4]. EMPs, on the other hand, provide a broad area effect, allowing simultaneous disabling of multiple projectiles. However, EMP systems may disrupt nearby electronics, limiting their use in civilian-populated areas[5].

Net-Based Capture Systems

Net-based systems use physical nets to ensnare projectiles mid-flight, rendering them incapable of reaching their target. Nets can be launched from ground platforms or other drones, effectively intercepting low-speed, low-altitude payloads. This method is non-lethal and minimizes collateral damage, though it has limitations in range and reloadability. For example, Fortem Technologies’ DroneHunter F700 is a specialized drone designed to intercept other drones or projectiles by deploying high-strength nets. The DroneHunter captures aerial threats, rendering them unable to complete their intended path, thus minimizing potential collateral damage. This system is particularly effective for low-speed, low-altitude interception in urban areas, where other interception methods might risk collateral damage. However, net-based systems have limitations in range and require reloading unless automated, which can slow response time during high-threat scenarios[6].

Geofencing

Geofencing involves creating virtual boundaries around sensitive areas using GPS coordinates. Drones equipped with geofencing technology are automatically programmed to avoid flying into restricted zones. This method is proactive, preventing any compliant drone from even getting close to troops, but it can be bypassed by modified or non-compliant drones. DJI, a major commercial drone manufacturer, has integrated geofencing technology into its drones. This feature prevents DJI drones from entering restricted areas, such as airports or military zones, and provides a non-lethal preventive measure. However, geofencing requires drone manufacturer cooperation, and modified or non-compliant drones can bypass these restrictions, making it unreliable in high-risk environments[7].

Objectives of Effective Drone Neutralization

When designing or selecting a drone interception system, several key objectives must be prioritised:

  1. Low Cost-to-Intercept: Interception costs are critical, as small, inexpensive drones can carry projectiles that may cost significantly less than the interception method itself. Low-cost systems, like net-based options, are preferred for frequent or lower-risk engagements. Conversely, more expensive kinetic methods may be necessary for high-speed or armored projectiles. Raytheon’s Coyote missiles exemplify the cost tradeoff of kinetic systems and highlight the economic considerations that come into play in military versus civilian contexts[1].
  2. Portability: Interception systems should ideally be lightweight, collapsible, and transportable across various settings. Portable systems, like the DroneGun MKIII jammer and net-based launchers, enable rapid setup and adaptability to various operational environments, making them valuable in mobile defense scenarios[2].
  3. Ease of Deployment: Quick deployment is essential in dynamic scenarios like military operations or large-scale events. For example, drone-based net systems and RF jammers mounted on mobile units offer flexible deployment options, allowing for rapid response in fast-moving situations[3].
  4. Quick Reloadability or Automatic Reloading: In high-threat environments, interception systems with rapid or automated reloading capabilities ensure continuous defense. Systems like lasers and RF jammers support quick re-engagement, while net throwers and kinetic projectiles may require manual reloading, potentially reducing efficiency in sustained threats[4].
  5. Minimal Collateral Damage: In urban or civilian areas, minimizing collateral damage is paramount. Non-lethal interception methods, such as jamming, spoofing, and net-based systems, provide effective solutions that neutralize threats without excessive environmental or infrastructural impact. Systems like the Fortem DroneHunter F700 illustrate the potential for non-destructive interception in urban areas[6].

Evaluation of Drone Interception Methods

Pros and Cons of Drone Interception Methods

  1. Jamming (RF/GNSS)
    • Pros: Jamming effectively disrupts communications between drones and operators, often forcing premature payload drops. Non-destructive and widely applicable, jamming can target multiple drones simultaneously, making it well-suited to civilian defense applications[3].
    • Cons: Jamming is limited in effectiveness against autonomous drones and can interfere with nearby electronics, posing a risk in urban areas where collateral electronic disruption can impact civilian infrastructure.
  2. Net Throwers
    • Pros: Non-lethal and environmentally safe, nets can physically capture projectiles without destroying them, making them ideal for urban settings where collateral damage is a concern.
    • Cons: Effective primarily against slow-moving, low-altitude projectiles, net throwers require manual reloading between uses, which limits their response time during sustained threats unless automated.
  3. Missile Launch
    • Pros: High precision and range, missile systems like Raytheon's Coyote are effective for engaging fast-moving or long-range targets and are ideal in military settings for large-scale aerial threats[1].
    • Cons: High cost per missile, risk of collateral damage, and infrastructure requirements restrict missile use to defense zones rather than civilian settings.
  4. Lasers
    • Pros: Laser systems are silent, precise, and capable of engaging multiple targets without producing physical debris. This makes them valuable in urban environments where damage control is essential[4].
    • Cons: Lasers are costly, sensitive to environmental conditions like fog and rain, and have high energy demands that complicate portability, limiting their field application[5].
  5. Hijacking
    • Pros: Allows operators to take control of drones without destroying them. It’s a non-lethal approach, ideal for situations where it’s essential to capture the drone intact.
    • Cons: Hijacking poses collateral risks for surrounding electronics, has limited range, and is operationally complex in active field environments, requiring specialized training and equipment.
  6. Spoofing
    • Pros: Non-destructively diverts drones from sensitive areas by manipulating signals, suitable for deterring drones from critical zones[3].
    • Cons: Technically complex and less effective against drones with advanced anti-spoofing technology, requiring specialized skills and equipment.
  7. Geofencing
    • Pros: Geofencing restricts compliant drones from entering sensitive zones, creating a non-lethal preventive barrier that offers permanent coverage[7].
    • Cons: Reliance on manufacturers for integration and potential circumvention by modified drones limits geofencing as a standalone defense measure in high-risk scenarios.

Pugh Matrix

The Pugh Matrix is a decision-making tool used to evaluate and compare multiple options against a set of criteria. By systematically scoring each option across various criteria, the Pugh Matrix helps to identify the most balanced or optimal solution based on the chosen priorities. For this report, we constructed a Pugh Matrix to assess different interception methods for countering projectiles dropped by drones.

Each method was evaluated across six key criteria: Cost-to-Intercept, Portability, Ease of Deployment, Reloadability, Minimal Collateral Damage, and Effectiveness. For each criterion, the methods were scored as Low (0 points), Medium (1 point), or High (2 points), reflecting their relative strengths and limitations in each area. The scores were then totaled to provide an overall assessment of each method’s viability as a counter-projectile solution. This approach enables a comprehensive comparison, highlighting methods that provide a balanced combination of cost-efficiency, ease of use, and effectiveness in interception.

Method	Minimal Cost-to-Intercept	Portability	Ease of Deployment	Reloadability	Minimal Collateral Damage	Effectiveness	Total Score
Jamming (RF/GNSS)	Medium	High	High	High	High	Medium	10
Net Throwers	High	High	High	Medium	High	High	11
Missile Launch	Low	Low	Medium	Low	Low	High	3
Lasers	High	Medium	Medium	High	High	High	10
Hijacking	High	High	Medium	Low	High	Medium	8
Spoofing	Medium	High	Medium	Medium	High	Medium	8
Geofencing	High	High	High	High	High	Low	10

The resulting scores can be found in the Pugh Matrix above, where Net Throwers scored the highest with 11 points, indicating strong performance across several criteria, particularly in minimizing collateral damage and cost-effectiveness. Other methods, such as Jamming and Geofencing, also scored well, while missile-based solutions, despite high effectiveness, scored lower due to high costs and limited portability. Consequently, we will focus on Net Throwers as our main interception mechanism.
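As a sanity check, the totals can be recomputed from the individual ratings with a short script (a Python sketch; the scoring assumed here is Low = 0, Medium = 1 and High = 2 points per criterion):

```python
# Recompute the Pugh Matrix totals from the individual ratings.
# Scoring assumption: Low = 0, Medium = 1, High = 2 points per criterion.
SCORE = {"Low": 0, "Medium": 1, "High": 2}

# Ratings per method, in the order: Cost-to-Intercept, Portability,
# Ease of Deployment, Reloadability, Minimal Collateral Damage, Effectiveness.
ratings = {
    "Jamming (RF/GNSS)": ["Medium", "High", "High", "High", "High", "Medium"],
    "Net Throwers":      ["High", "High", "High", "Medium", "High", "High"],
    "Missile Launch":    ["Low", "Low", "Medium", "Low", "Low", "High"],
    "Lasers":            ["High", "Medium", "Medium", "High", "High", "High"],
    "Hijacking":         ["High", "High", "Medium", "Low", "High", "Medium"],
    "Spoofing":          ["Medium", "High", "Medium", "Medium", "High", "Medium"],
    "Geofencing":        ["High", "High", "High", "High", "High", "Low"],
}

totals = {method: sum(SCORE[r] for r in rs) for method, rs in ratings.items()}
for method, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{method:18s} {total}")
```

Net Throwers comes out on top with 11 points, matching the conclusion drawn from the matrix.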

Path Prediction of Projectile[15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33]

Theory:

Catching a projectile requires several steps. First the projectile has to be detected, after which its trajectory has to be determined. Once we know how the projectile moves through space and time, the net can be shot to catch it. However, depending on the distance to the projectile, the net needs different amounts of time to reach it, and in that time the projectile has moved to a different location. The net must therefore be shot towards a position where the projectile will be in the future, such that the two collide.


Since projectiles do not make sound and do not emit RF waves, they are not as easy to detect as drones. For this part the assumption is made that the projectile is visible. Making the system detect projectiles that are not visible would probably be possible, but would complicate the system considerably. The U.S. Army uses electronic fields which can detect passing bullets; something similar could be used to detect projectiles that are not visible, but this will not be done in this project due to the complexity.

To detect the projectile, a camera with tracking software is used. The drone itself also has to be detected by this camera, which will be done by training an AI model for drone detection.

Once the projectile is in sight, its trajectory has to be determined. The projectile only decelerates due to air friction and accelerates due to gravity. For a first model, air friction can be neglected to get a good approximation of the projectile's flight. Since not every projectile experiences the same air resistance, the best friction coefficient should be found experimentally by dropping different projectiles; the best coefficient is the one for which most projectiles are caught by the system. An improvement would be to pre-assign different friction coefficients for different sizes of projectiles. Since frontal area plays a big role in the amount of friction a projectile experiences, this is a reasonable refinement.
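As an illustration of the friction-coefficient idea above, the following sketch simulates a vertical drop with quadratic drag using simple Euler integration (Python used here as a stand-in; the drag-per-unit-mass parameter k is a placeholder that would be fitted from drop experiments, as described):

```python
# Simulate a projectile dropped from rest under gravity with quadratic air
# drag, using simple Euler integration. The drag-per-unit-mass parameter k
# (unit 1/m) is a placeholder to be fitted from drop experiments.
G = 9.81  # gravitational acceleration, m/s^2

def drop_with_drag(height, k, dt=0.001):
    """Return (fall time in s, impact speed in m/s) for a drop from rest."""
    y, v, t = height, 0.0, 0.0
    while y > 0.0:
        a = G - k * v * v  # drag opposes the downward motion
        v += a * dt
        y -= v * dt
        t += dt
    return t, v

t_vac, _ = drop_with_drag(78.5, k=0.0)    # vacuum: ~4.0 s
t_drag, _ = drop_with_drag(78.5, k=0.01)  # with drag: a slower fall
print(t_vac, t_drag)
```

Comparing simulated drop times against filmed drops for different projectiles is one way to fit k experimentally.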

With the expected path of the projectile known, the net can be launched to catch the projectile midair. Basic kinematics gives accurate results for these problems. Moreover, the problem can be treated as two-dimensional: since we only protect against projectiles coming towards the system, we can always define a plane containing both the trajectory of the projectile and the system, making the problem two-dimensional. If the path of the projectile were to leave this plane, the system would not need to protect against it, as a projectile that misses the plane containing the system does not form a threat.

Calculation:

PP1: Output 2D model

A Mathematica script was written which calculates at which angle the net should be shot. The script currently uses placeholder values, which have to be determined experimentally based on the hardware used; for example, the speed at which the net is shot should be measured and updated in the code. The distance and speed of the projectile can be determined using sensors on the device. The output of the Mathematica script is shown in figure PP1: it gives the angle at which the net should be shot, as well as the trajectories of the net and the projectile, to visualize how the interception will happen.
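The Mathematica script itself is not reproduced on this page; the sketch below re-implements the same drag-free 2D intercept calculation in Python. All numerical values (distance, height, net speed) are placeholders, just as in the script:

```python
import math

G = 9.81  # m/s^2

def intercept_angle(d, h, v0, s):
    """Launch angle (rad) so that a net fired at speed s from the origin
    meets a projectile at horizontal distance d and height h that is
    currently falling with downward speed v0. Both objects follow
    drag-free ballistic paths, so the 0.5*g*t^2 terms cancel and we
    solve  d*tan(theta) = h - v0*d/(s*cos(theta))  by bisection."""
    def miss(theta):
        t = d / (s * math.cos(theta))  # time for the net to cover d
        return s * math.sin(theta) * t - (h - v0 * t)
    lo, hi = 0.0, math.radians(89.0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if miss(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Placeholder scenario: projectile 20 m away at 26 m height, released from
# rest at this instant; net launched at 30 m/s.
theta = intercept_angle(d=20.0, h=26.0, v0=0.0, s=30.0)
print(math.degrees(theta))
```

With v0 = 0 the gravity terms cancel completely and the solution reduces to aiming straight at the projectile's current position (tan θ = h/d), the classic "monkey and hunter" result; a nonzero downward speed v0 shifts the aim point below it.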



Accuracy:

PP2: Calculations accuracy

The height at which projectiles are dropped can be estimated from footage of projectile drops. The height can be determined by assuming the projectile falls in vacuum, which approximates reality well here, giving h = 0.5*g*t^2. Using a YouTube video[34] as data, almost every drop takes at least 4 seconds, which means the projectiles are dropped from at least 78.5 m. Suppose we catch the projectile at two thirds of its fall, still leaving plenty of height for it to be redirected safely, and the net is 1 by 1 meter (so half a meter from its center to the closest side). Then the projectile must not be dropped more than 0.75 meter beside us (outside of the plane), since the system would not catch it even if everything else went perfectly (see figure PP2 for the calculation). Even if the projectile were dropped 0.7 meter beside the device, the net would only hit it with its edge, which does not guarantee that the projectile stays in the net.
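The drop-height estimate above follows directly from h = 0.5*g*t^2; as a quick check:

```python
# Quick check of the vacuum-drop height used above: h = 0.5 * g * t^2.
G = 9.81  # m/s^2

def drop_height(t):
    """Height (m) of a vacuum drop lasting t seconds."""
    return 0.5 * G * t * t

print(drop_height(4.0))  # ~78.5 m for the observed 4-second drops
```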

An explosive projectile will still do damage at 0.75 meters from a person. This means the earlier assumption that a 2D model suffices, because everything happens in one plane, does not fulfill all our needs. Enemy drones do not need centimeter accuracy, since explosive projectiles like grenades can kill a person even 10 meters away. For better results a 3D model should therefore be used.


3D model:

PP3: Output 3D model

We tried to replicate the 2D model in 3D, but this did not work out with the same generality. For this reason some extra assumptions were made. These assumptions are based on reality and are therefore still valid for this system. The only thing they change is the generality of the model: it can no longer be used for arbitrary cases, only for projectiles dropped from drones.


In the 2D model the starting velocity of the projectile could be varied. In reality, however, drones do not have a launching mechanism and simply release the projectile without any starting velocity, so the projectile drops straight down (apart from some sideways movement due to external forces like wind). This was observed after watching large amounts of drone warfare footage, where it was also noted that drones do not usually crash into people, but mainly into tanks, since tanks require an accurate hit between the base and the turret. Against people, drones drop the projectile from a standstill (to get the required aim). This simplification also makes the 2D model valid again: since the projectile has no sideways motion, it never leaves the plane, and we can always construct a plane containing both the path of the projectile and the mechanism which shoots the net.

Since this mechanism works in the real world (3D), it was decided to plot the 2D model at the required angle in 3D, giving a good representation of how the mechanism will work. The new model gives the required shooting angle and then shows the paths of the net and the projectile in 3D. For further insight, the 2D trajectories of the net and projectile are also plotted; this can be seen in figure PP3.




Accuracy 3D model:

PP5: Interception with wind
PP4: Calculations weight net

The 3D model as it is now set up only works in a “perfect” world: no wind, no air resistance, and no other external factors influencing the paths of the projectile and the net. We also assume that the system knows the drone's position with infinite accuracy. In reality this is simply not true, so it is important to know how closely the model replicates reality and whether it can be used.

Wind plays a big role in the paths of both the projectile and the net, so the model must also function under these circumstances. To determine the acceleration of the projectile and the net, the drag force on both objects must be determined. Two important factors on which the drag force depends are the drag coefficient and the frontal area of the object. Since different projectiles are used during warfare, such as hand grenades or Molotov cocktails, the exact drag coefficient and frontal area of the projectile are unknown. After a dive into the literature it was decided to take average values for the drag coefficient and the frontal area, since the reported values lay in a similar range. For the frontal area this was to be expected, since drones can only carry objects of limited size.

After some calculations (see figure PP4) it was found that if the net (including the weights on the corners) weighs 3 kg, the accelerations of the projectile and the net due to wind are identical, still leading to a perfect interception, as can be seen in figure PP5. This is based on literature values; at a later stage the exact drag coefficient and surface area of the net must be found and the weight changed accordingly. As for projectiles which do not exactly match the assumed drag coefficient or frontal area, the model shows that deviations of up to 50% of the used values do not perturb the projectile enough for the interception to miss. This range includes almost all theoretical values found for the different projectiles, making the model highly reliable under the influence of wind.
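The weight-matching argument above can be sketched as follows. Since the drag acceleration is a = 0.5*rho*Cd*A*v^2/m, matching the net's acceleration to the projectile's fixes the net mass through a ratio in which air density and wind speed cancel. All parameter values below are illustrative placeholders, not the literature values used in figure PP4:

```python
# Sketch of the weight-matching argument: the drag acceleration is
#   a = 0.5 * rho * Cd * A * v^2 / m,
# so matching the net's acceleration to the projectile's fixes the net
# mass through a ratio in which air density and wind speed cancel.
# All numbers below are illustrative placeholders, not measured values.

def drag_accel(rho, cd, area, v, mass):
    """Quadratic-drag acceleration in m/s^2."""
    return 0.5 * rho * cd * area * v * v / mass

def matching_net_mass(m_proj, cd_proj, area_proj, cd_net, area_net):
    """Net mass giving the same drag acceleration as the projectile."""
    return m_proj * (cd_net * area_net) / (cd_proj * area_proj)

# Placeholder projectile (~0.4 kg, grenade-like) and an effective net
# frontal area much smaller than 1 m^2, because the net is porous.
m_net = matching_net_mass(m_proj=0.4, cd_proj=0.7, area_proj=0.004,
                          cd_net=1.2, area_net=0.02)
print(m_net)  # net mass (kg) that matches the wind response
```

Because air density and wind speed cancel, the matched mass holds for any wind condition under the quadratic-drag assumption.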

An uncertainty within the system is the exact location of the drone. We aim to know precisely where the drone, and thus the projectile, is; in reality infinite accuracy is unachievable, but we can get close. The sensors in the system must be optimized to locate the drone as accurately as possible. Fortunately, sensors exist that achieve high accuracy, for example LiDAR sensors with a range of 2000 m and an accuracy of 2 cm. The 2000 m range comfortably covers the range at which our system operates, and the 2 cm accuracy is far smaller than the size of the net (100 cm by 100 cm), so this should not cause problems for the interception.

Target Market Interviews

Interview 1: Introductory interview with F.W., Australian Military Officer Cadet (military as a potential user)

Q1: What types of anti-drone (/projectile protection) systems are currently used in the military?

  • A: The systems currently in use include the Trophy system from Israel, capable of intercepting anti-tank missiles and potentially intercepting dropped payload explosives or drones. Another similar system is the Iron Fist, also from Israel. Additionally, Anduril's Lattice system offers detection and identification capabilities within a specified operational area, capable of detecting all possible threats and, if necessary, intercepting them.

Q2: What are key features that make an anti-drone system effective in the field?

  • A: Effective systems are often deployed in open terrains like fields and roads, as drones have difficulty navigating dense forests. An effective anti-drone system could include a training or tactics manual to optimize use in these environments. Key features also include a comprehensive warning system that can alert troops on the ground to incoming drones, allowing them to take cover. Systems should not focus solely on interception but also on early detection.

Q3: What are the most common types of drones that these systems are designed to counter?

  • A: The systems are primarily designed to counter consumer drones and homemade FPV drones, which are known for their speed and accuracy. These drones are incredibly effective and easy to construct.

Q4: What are the most common types of drone interception that these systems employ?

  • A: The common types of interception include non-kinetic methods such as RF and optical laser jamming, which have a low cost-to-intercept. Kinetic methods include shooting with regular rounds, using net charges, or employing blast interception techniques such as those used in the Iron Fist system. High-cost methods like air-burst ammunition are also utilized due to their high interception likelihood.

Q5: Are there any specific examples of successful or failed anti-drone operations you could share?

  • A: No specific examples were shared during the interview.

Q6: How are drones/signals determined to be friendly or not?

  • A: The identification process was not detailed in the interview.

Q7: What are the most significant limitations of current anti-drone systems?

  • A: Significant limitations include the high cost, which makes these systems unaffordable for individual soldiers or small groups. Most systems are also complex and require vehicle mounts for transportation, making them less suitable for quick or discreet maneuvers.

Q8: Are there any specific environments where anti-drone systems struggle to perform well?

  • A: These systems typically perform well in large open areas but may struggle in environments with dense vegetation, which can offer natural cover for troops but limit the functionality of the systems.

Q9: Can you give a rough idea of the costs involved in deploying these systems?

  • A: The costs vary widely, and detailed pricing information is generally available on a need-to-know basis, making it difficult to provide specific figures without direct inquiries to manufacturers.

Q10: Which ethical concerns may be associated with the use of anti-drone systems, particularly regarding urban, civilian areas?

  • A: Ethical concerns are significant, especially regarding the deployment in civilian areas. The military undergoes extensive ethical training, and all new systems are evaluated for their ethical implications before deployment.

Q11: What improvements do you think are necessary to make anti-drone systems more effective?

  • A: The interview did not specify particular improvements but highlighted the need for systems that can be easily deployed and operated by individual soldiers.

Q12: Do you think AI or machine learning could help enhance anti-drone systems?

  • A: The potential for AI and machine learning to enhance these systems is recognized, with ongoing research into their application in anti-drone and anti-missile technology.

Q13: Is significant training required for personnel to effectively operate anti-drone systems?

  • A: The level of training required varies, but there is a trend towards developing systems that require minimal training, allowing them to be used effectively straight out of the box.

Q14: How do these systems usually handle multiple drone threats or swarm attacks?

  • A: Handling multiple threats involves a combination of detection, tracking, and engagement capabilities, which were not detailed in the interview.

Q15: How are these systems tested and validated before they are deployed in the field?

  • A: Systems undergo rigorous testing and validation processes, often conducted by military personnel to ensure effectiveness under various operational conditions.

Interview 2: Interview with B.D. to specify requirements for system, Volunteer on Ukrainian front lines (volunteers and front line soldiers as potential users)

What types of anti-drone systems are currently used near the front lines?

  • Mainly see RF jammers and basic radar systems deployed here
  • Detection priority
  • Nothing for specifically intercepting munitions dropped by drones
    • Follow-up: What are general ranges for these RF jammers and other detection methods?
      • Very different, RF detectors or radar can have ranges of 100 meters but very much depends on the conditions
      • Rarely available consistently across positions
      • Soldiers usually listen for drones but this can be difficult when moving or during incoming/outgoing fire
      • Any RELIABLE range would be beneficial but to give enough time to react a range around 20 or 30 meters would probably be the minimum

What are key features that make an anti-drone system effective in the field?

  • Needs to be easy to move and hide, no bulky shapes or protruding parts, light
    • No space in backpacks, cars, trucks, already often full of equipment. Nothing you couldn’t carry and run with (estimate: 30kg)
    • Pack-up and deployment speed: exfil usually takes drone operator squads multiple hours of packing up, but front line troops are in constant movement. Anything above 30 seconds is insufficient [and that already seems to be pushing it]
  • No language barrier advisable [assuming: basically meaning no training]
  • Detection range is quite critical. A few seconds need to be given for soldiers or others near the front lines to prepare for an incoming impact
    • Follow-up: What kind of alarm to show drone detection would be best?
      • [...] Conclusion: sound is only real option without immediately giving away position of system, drones will also often fly over positions so to alarm soldiers with lights would risk revealing their positions where the drone otherwise would have flown past. Sound already risky.

What are the most significant limitations of current anti-drone systems?

  • Apart from RF and some alternatives no real solution to drone drops on either side
  • Systems that are light enough to be carried by one person and simple enough to operate without extensive training are rare if any exist at all

General Discussion on Day-to-Day Experiences

  • Drone attacks very unpredictable. Periods of constant drone sounds above, then days of nothing. Constant vigilance necessary, which tires you out and puts you under serious pressure.
  • According to people he has talked to, RF jammers and RF/radar-based detection useful but very difficult to counter an FPV drone with a payload if it is able to reach you and get close to you, but very environment-dependent, e.g. open field vs forest
  • Despite the conditions, there's a shared sense of purpose between locals, volunteers, etc.
  • Constant fundraising needed to gather enough funds to fix vehicles (such as recently the van shown on IG account), as well as equipment and supplies both for locals and soldiers


Labeled System Overview

Mechanical Design

Introduction

To thoroughly detail the mechanical design and the design choices made throughout the design process, each component will be discussed individually. For each component, a brief description will be provided as to why this feature is necessary in the system, followed by a number of possible component design choices, and finally which design choice was chosen and why. To conclude, an overview of the system as a whole will be given along with a short evaluation and some next steps for improvements to the system beyond the scope of this project.

Kinetic Interception Charge (A)

Feature Description

Kinetic interception techniques cover a wide range of drone and projectile interception strategies that, as the name suggests, contain a kinetic aspect, such as a shotgun shell or a flying object. This method of interception relies on a physical impact, or near-impact, knocking the target off course or disabling it entirely. The necessity for kinetic interception capabilities was made clear during the first target market interview, in which jamming and (counter-)electronic warfare were discussed, along with the consequent necessity of being able to intercept drones and dropped projectiles in a non-electronic fashion.


Possible Design Choices

  • Shotgun shells
    • Using a mounted shotgun to kinetically intercept drones and projectiles is a possibility, proven also by the fact that shotguns are used as last lines of defense by troops at fixed positions as well as during transport [35]. Shotguns are also relatively widely available and the system could be designed to be adaptable to various shotgun types. However, the system would be very heavy and would need to be able to withstand the very powerful recoil of a shotgun firing.
  • Net charge
    • A net is a light-weight option which can be used to cover a relatively large area-of-effect consistently. This would make the projectile or drone much easier to hit, and thus contribute to the reliability of the system. The light weight would allow it to be rotated and fired faster than heavier options, as well as reducing the weight of any additional charges carried for the system, a critical factor mentioned in both the first and the second target market interviews.
  • Air-burst ammunition
    • This type of ammunition is designed to detonate in the air close to the incoming projectile or drone. It is very effective at preventing these hazards from reaching their targets, but sharply increases the cost-to-intercept, a concept introduced earlier in the interviews. It is also the most expensive of the three options, making it less suitable for the application of this system.


Selected Design

The chosen kinetic interception charge comprises a lightweight square net with a surface area of 1 m^2. The net has thin, weighted disks at its corners, which are stacked on top of one another to form a cylindrical charge that can be loaded into the spring-loaded or pneumatically-loaded turret. This charge is fired at incoming targets; on firing, the net spreads out and catches the projectile at the position where the path prediction algorithm calculates it will be by the time the net arrives. This decision is based on the net's light weight, low cost-to-intercept, quick reloading capability, and reliability in diverting any projectile caught within its surface area.

Rotating Turret (B)

Feature Description

The rotating turret system is designed to provide 360-degree rotational freedom for the attached net charge, allowing the engagement of threats from any direction. Not only does the turret rotate 360 degrees in the horizontal plane, it also covers 90 degrees in the vertical plane, i.e. from completely horizontal to completely vertical. This capability is critical in the context of fast-paced wars with ever-changing front lines, such as the one in Ukraine, where drone attacks can come from any direction around a given position. The versatility of a rotating turret significantly enhances the system's ability to respond to these aerial threats. Two stepper motors fitted with rotary encoders rapidly move the turret, one for each plane of rotation. The encoders keep the electronics aware of the current orientation of the net charge, so the turret can be moved to the correct position once a threat has been detected.
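As a minimal sketch of the aiming logic (not the actual firmware), a detected bearing can be converted into a signed number of stepper microsteps; the 200 steps/rev motor, 1/16 microstepping and direct drive (no gearbox) are assumptions:

```python
# Minimal sketch of the aiming logic: convert a detected bearing into a
# signed number of stepper microsteps. The 200 steps/rev motor, 1/16
# microstepping and direct drive (no gearbox) are assumptions.
STEPS_PER_REV = 200 * 16  # microsteps per full turret revolution

def steps_to_bearing(current_deg, target_deg):
    """Shortest signed rotation, in microsteps, from current to target."""
    delta = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    return round(delta / 360.0 * STEPS_PER_REV)

print(steps_to_bearing(350.0, 10.0))  # takes the short way across 0 degrees
```

The same conversion applies to the 0-90 degree elevation axis, without the wrap-around term.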

Sensor Belt Top View

Sensor 'Belt' (C)

Feature Description

The sensor ring around the bottom of the turret contains slots for three cameras and microphones. Furthermore, the ring and the main body interior can be adapted to house further sensor components, such as radar, thermal imaging and RF detectors. Fitting the system with a range of sensors plays an important role in ensuring that threats of different types can be detected, and provides a critical advantage in various weather and environmental conditions, such as dense trees and foliage around the system, or fog or smoke in the vicinity.


Possible Design Choices

  • Sensors
    • Camera
      • Best choice for open areas, as well as any situation where there is a direct line of sight to the drone, and consequently the projectile it drops. Three cameras each covering 120 degrees of space would be combined to provide a view of the entire 360 degree environment. However, as soon as line of sight is broken, the cameras alone are insufficient to detect drones effectively.
    • Microphone
      • Fitting the system with microphones complements the use of cameras effectively. Even when line of sight is broken, the microphone can pick up the sound of the drone flying. This is done by continuously performing frequency analysis on the audio recorded by the microphones and checking whether sound frequencies typically related to flying drones are present, including those given off by the small motors that power the rotor blades. If these frequencies are significantly present, the sound is interpreted as the detection of a nearby drone. A shortcoming of the microphone is its relatively small range, and it works less well with background noise, such as loud explosions or frequent firing.
    • Radar
      • While the range of radar sensors is typically very large, this type of detection has a significant limitation for small, fast FPV drones: their detectable radar cross-section is very small, and radars are often designed and calibrated to detect much larger objects, such as fighter jets or naval vessels. A specialized radar component would therefore be required, which would be very expensive and likely difficult to source if a high-end option were necessary; some budget options are available and are discussed earlier in this wiki. Finally, radar components could provide the additional advantage of detecting larger, slower surveillance drones flying at higher altitude. Detecting these and warning soldiers of their presence would allow them to better prepare for incoming surveillance, and consequently for an influx of FPV drone attacks on their position if it is found.
    • RF Detector
      • The RF detector is a very useful device which senses the radio frequencies (RF) of drones, drone controllers, and other electronic equipment. Analyzing these frequencies for those typically used by drones to communicate with their pilots, they can quickly be detected if they are in the vicinity. In theory, this could also be used to block that signal and try to prevent the controlling of the drone near the system.
  • Sensor implementation
    • Sensors implemented directly onto the moving turret
      • This option fixes the sensors relative to the turret and the horizontal orientation of the net charge. This would mean that the turret would rotate until the cameras detect the drone at a predetermined 'angle zero', which aligns with where the turret points.
    • Ring/Belt around the central turret
      • With a fixed ring around a moving turret, the cameras (and other sensors) and the turret are not fixed relative to one another. Instead, the sensors are fixed with respect to the rest of the system, and thus with respect to the ground and environment.
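The frequency-analysis approach described for the microphone above can be sketched as follows (a Python/NumPy illustration; the 150-400 Hz rotor band, the 8 kHz sample rate and the energy threshold are placeholder assumptions, not measured drone signatures):

```python
import numpy as np

# Sketch of the microphone frequency analysis: flag a "drone-like" sound
# when a band associated with small motors/rotors holds a large share of
# the signal energy. The 150-400 Hz band, 8 kHz sample rate and the
# decision threshold are placeholder assumptions.
FS = 8000          # sample rate, Hz
BAND = (150, 400)  # assumed rotor-frequency band, Hz

def drone_band_ratio(samples):
    """Fraction of the signal energy inside the rotor band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)
    in_band = (freqs >= BAND[0]) & (freqs <= BAND[1])
    return spectrum[in_band].sum() / spectrum.sum()

rng = np.random.default_rng(0)
t = np.arange(FS) / FS                # one second of synthetic audio
rotor = np.sin(2 * np.pi * 250 * t)   # synthetic 250 Hz "rotor" tone
noise = 0.1 * rng.standard_normal(FS)
print(drone_band_ratio(rotor + noise))  # close to 1: band dominates
print(drone_band_ratio(noise))          # small: no drone-like content
```

A real implementation would run this on short sliding windows and compare the ratio against a calibrated threshold before raising the audio alarm discussed in the second interview.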


Selected Design

Having the sensors implemented directly into the turret has the significant downside that the cameras would move while the turret is being aimed. Rather than providing constant feedback on a stationary drone, their output would change as the turret rotates towards it, and the image received by the cameras would likely be blurred during rotation, decreasing the chance of accurate detection and tracking. Therefore, the fixed ring of sensors was chosen. Furthermore, while a radar sensor provides long-range detection benefits, the most critical sensors for FPV detection are the remaining three: camera, microphone and RF detector. These are therefore the primary choices for a minimum-cost system.

Main Body (D)

Feature Description

The main body simply consists of a short cylinder with a diameter of 275mm. This body houses the main electronics of the system and is made of 3D-printed PETG, with the possibility of adding an aluminium frame for extra stability at the expense of additional weight. The main body features a removable bottom (twist-and-pull lid) to provide access to the electronics in case maintenance is necessary. Its size has been limited to allow for easier transport, and complex parts have been avoided to allow for easy replacement and quick understanding of the system, helping to avoid the need for additional training. The bottom lid also contains a bracket which allows the upper system to slide and fix onto the tripod rail below in a quick and simple manner. This provides fast attachment and detachment and allows the system to be quickly dismantled if necessary, another important factor discussed in the aforementioned target market interviews.

Possible Tripod Choice

Tripod and Tripod Connector (E and F)

Feature Description

The base of the system, as well as a connecting element between the base and the rest of the system, had a number of requirements from target users. Firstly, the system should be easy to dismantle in case positions need to be changed quickly. Secondly, it should be easily transportable, with considerations for not just the size but also the weight of the components. Finally, the base should be able to accommodate various environments, meaning various ground conditions.


Possible Design Choices

  • Base
    • Fixed legs (like a chair or table)
      • While fixed legs provide the opportunity to add additional stability to the system through reinforced materials in the legs of the system, these cannot be adapted for different conditions on the ground. For example, if uneven ground is present, the system would not be able to correct for this. Furthermore, solid legs for additional support would significantly increase the weight of the system. Lastly, if the legs are of a fixed shape, they cannot be folded and packed away into a smaller volume, making transportation more difficult.
    • Collapsible tripod
      • A collapsible tripod has a number of advantages and disadvantages. A first disadvantage compared to fixed legs is the reduction in strength, further exacerbated by the collapsible joints of the legs likely weakening the structure relative to their fixed counterparts. However, the adaptability of a tripod to uneven ground and other potentially difficult ground conditions makes it very useful. It is also much easier to transport, since the base can be collapsed into a much smaller bounding volume.
  • Base-to-system connection
    • Quick-release mechanism
      • While contributing to a fast disassembly through the rapid process of quick-release mechanisms opening and closing, these mechanisms can often be more complex than necessary, essentially trading a fast mechanism for a slightly more elaborate build, involving multiple moving parts, springs, and so on. This increases the risk of components getting stuck during use, especially in muddy, wet or otherwise interfering environments.
    • Rail-slide
      • The rail connection is a simple solution which balances well the requirement for a simple mechanism that minimizes the risk of jamming or malfunction with the requirement for a fast connection and disconnection when necessary. It requires no tools, nor any additional training to use.
    • Screws
      • Most stable option but definitely not viable considering the need to be able to rapidly assemble and disassemble the system in active combat zones. Furthermore, this would require users to constantly carry around a screwdriver, and if it gets lost, there would be some significant issues with disassembly.


Selected Design

The rail-slide mechanism was selected due to its balance between a simple, robust design and a mechanism that allows fast deployment and disassembly in situations where every second counts. This connection mechanism, together with the highly adaptable tripod base, allows the system to be deployed quickly in a wide range of ground conditions without requiring training or time investment in setting up and dismantling the system. Both design choices also make transportation as easy as possible.

Prototype

Components Possibilities

Radar Component (Millimeter-Wave Radar):

Component: Infineon BGT60ATR12C

Component: RFBeam K-LC7 Doppler Radar

  • Price: Around €55
  • Description: A Doppler radar module operating at 24 GHz, designed for short to medium range object detection. It’s used in UAV tracking due to its cost-efficiency and low power consumption.
  • Software: Arduino IDE or MATLAB can be used for basic radar signal processing.

RF Component:

Component: LimeSDR Mini

  • Price: Not deliverable at the moment
  • Description: A compact, full-duplex SDR platform supporting frequencies from 10 MHz to 3.5 GHz, useful for RF-based drone detection.
  • Software: LimeSuite, a suite of software for controlling LimeSDR hardware and custom signal processing.

Component: RTL-SDR V3 (Software-Defined Radio)

  • Price: Around €30-40
  • Description: An affordable USB-based SDR receiver capable of monitoring a wide frequency range (500 kHz to 1.75 GHz), including popular drone communication bands (2.4 GHz and 5.8 GHz). While not as advanced as higher-end SDRs, it’s widely used in hobbyist RF applications.
  • Software: GNU Radio or SDR# (SDRSharp), both of which are open-source platforms for signal demodulation and analysis.

Acoustic Component:

Component: Adafruit I2S MEMS Microphone (SPH0645LM4H-B)

  • Price: Around €7-10
  • Description: A low-cost MEMS microphone offering high sensitivity, commonly used in sound detection projects for its clarity and noise rejection.
  • Software: Arduino IDE or Python with SciPy for sound signature recognition.
  • Website: https://www.adafruit.com/product/3421

Component: DFRobot Fermion MEMS Microphone Module - S15OT421 (Breakout)

  • Price: Around €4
  • Description: A low-cost MEMS microphone offering high sensitivity, commonly used in sound detection projects for its clarity and noise rejection.
  • Software: Arduino IDE.
  • Website: https://www.dfrobot.com/product-2357.html

Vision-Based Component:

Component: Arducam 12MP Camera (Visible Light)

Component: Raspberry Pi Camera Module v2

  • Price: Around €15-20
  • Description: A small, lightweight 8MP camera compatible with Raspberry Pi, offering high resolution and ease of use for vision-based drone detection. It can be paired with machine learning algorithms for object detection.
  • Software: OpenCV with Python for real-time image processing and detection, or TensorFlow for more advanced machine learning applications.
  • Website: https://www.raspberrypi.com/products/camera-module-v2/

Component: ESP32-CAM Module

Sensor Fusion and Central Control

Component: Raspberry Pi 4

  • Price: Around €35
  • Description: A small, affordable single-board computer that can handle sensor fusion, control systems, and real-time data processing.
  • Software: ROS (Robot Operating System) or MQTT for managing communications between the sensors and the central processing unit.
  • Website: https://www.raspberrypi.com/products/raspberry-pi-4-model-b/

Component: ESP32 Development Board

  • Price: Around €10-15
  • Description: A highly versatile, Wi-Fi and Bluetooth-enabled microcontroller. It’s lightweight and low-power, making it ideal for sensor fusion in portable devices.
  • Software: Arduino IDE or Micropython, with MQTT for sensor data transmission and real-time control.
  • Website: https://www.espressif.com/en/products/devkits

Testing of Prototype components

To verify whether we have achieved some of these requirements, we have to devise test scenarios that allow us to quantitatively determine our prototype's accuracy. The notion of accuracy is of course ambiguous: we cannot test our prototype in a warzone-like environment, so accuracy in a lab may not translate to accuracy in the trenches. However, we will attempt to simulate such an environment through the use of a projector, as well as through the use of speakers.

Requirements verification

In this section we present a stepwise procedure to test whether certain components meet the requirements.

  • R1: Detect Rogue Drone
    • Objective: Verify that the system can detect a rogue drone within a specific detection range, minimizing false negatives.
    • Setup:
      1. Use a single drone labeled as “rogue.” Start by positioning the drone at a distance of 5 meters from the detection system, and incrementally increase the distance by 2 meters until the maximum possible distance within the room or 30 meters, whichever is smaller.
      2. Mark the room at each meter interval so that the drone can be placed accurately at each distance point.
    • Test Procedure:
      1. At each distance interval, power on the drone and ensure it is hovering stably. The system should attempt to detect the drone’s presence at each point.
      2. Record whether the detection system correctly identifies the rogue drone's presence at each interval.
    • Measurement Method:
      1. Detection confirmation: the system should provide a visible or audible signal (like an LED indicator, a beep, or a message on a screen) upon detecting the drone.
      2. Record detection: log each detection attempt, noting the distance at which the detection succeeded or failed.
    • Quantitative Criteria:
      - Pass: the system detects the rogue drone at all distances up to 30 meters (or the room’s maximum achievable distance) without any missed detections.
      - Fail: the system fails to detect the drone at any distance within this range.
  • R4: Detect Drone Speed
    • Objective: Validate that the system can measure the drone’s speed accurately within an enclosed space.
    • Setup:
      1. Mark two points on the floor, exactly 5 meters apart, to serve as start and end points for the drone to travel in a straight line.
      2. Program the drone to fly between these points at different speeds (e.g., 5 m/s, 10 m/s, 15 m/s) if supported by the drone’s settings.
      3. Position a high-speed camera or timer-based tool to measure the time taken for the drone to travel between the start and end points.
    • Test Procedure:
      1. Set the drone to fly from the starting point to the end point at each speed setting (5 m/s, 10 m/s, 15 m/s).
      2. Use a stopwatch or high-speed camera to record the time taken for the drone to cross the 5-meter distance for each speed trial.
      3. Calculate the actual speed from the time and distance traveled.
    • Measurement Method:
      1. Manual speed calculation: compute the actual speed as \( \text{Speed} = \frac{\text{Distance}}{\text{Time}} \) from the measured time across the 5-meter distance.
      2. Compare with system detection: log the detected speed from the system and compare it with the manually calculated speed.
    • Quantitative Criteria:
      - Pass: the system’s detected speed is within ±5% of the manually calculated speed for each test.
      - Fail: the detected speed deviates more than ±5% from the calculated speed in more than one trial.
  • R7: Track Drone with Laser
    • Objective: Test whether the system’s laser can track the drone accurately as it moves within a 1m² area.
    • Setup:
      1. Mark a 1m² square area on the floor, with a 1m x 1m boundary.
      2. Place the drone within this marked area and set it to fly in circular, random, or figure-eight patterns within the boundary, keeping it stable and at a consistent height.
      3. Ensure the laser tracking system is positioned to follow the drone’s movements within this confined area.
    • Test Procedure:
      1. Start the drone’s movement within the 1m² square and activate the laser tracking system.
      2. Observe the laser’s movement in real time, ensuring it follows the drone as it stays within the boundaries of the 1m² area.
    • Measurement Method:
      1. Visual observation: record the laser's accuracy with a high-speed camera to capture any deviations outside the 1m² boundary.
      2. Boundary tracking analysis: play back the recorded footage and analyze whether the laser consistently stays within the 1m² boundary around the drone.
    • Quantitative Criteria:
      - Pass: the laser remains within the 1m² boundary around the drone for at least 95% of the movement time during the test.
      - Fail: the laser tracking drifts outside the boundary for more than 5% of the time.
  • R8: Can Intercept Drone/Projectile
    • Objective: Verify the system’s ability to intercept the drone within a 1m² area without causing any unintended impact outside the target zone.
    • Setup:
      1. Define a 1m² target area on the floor, marking it clearly.
      2. Position the drone in the center of this area and program it to hover or move slowly within the 1m² space.
      3. Configure the interception mechanism (net launcher, tagging device, etc.) to target the drone within this specified area.
    • Test Procedure:
      1. Once the drone is positioned within the 1m² area, initiate the interception mechanism.
      2. Repeat the interception test at different points within the 1m² target area to ensure consistency.
    • Measurement Method:
      1. Interception accuracy: use a high-speed camera to capture the interception action, confirming that it occurs entirely within the designated 1m² area.
      2. Surrounding impact analysis: review the footage to ensure that the interception mechanism does not affect any area outside the 1m² boundary.
    • Quantitative Criteria:
      - Pass: the interception occurs within the 1m² target area, with no impacts or interference outside the boundary, in at least 95% of trials.
      - Fail: the interception strays outside the target area or impacts surroundings in more than 5% of cases.
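As a small illustration of the R4 criterion, the manual speed calculation and the ±5% pass check can be sketched in Python (the function names are our own, not part of the test protocol):

```python
def measured_speed(distance_m: float, time_s: float) -> float:
    """Manual speed calculation: Speed = Distance / Time."""
    return distance_m / time_s

def speed_check_passes(detected_mps: float, actual_mps: float,
                       tol: float = 0.05) -> bool:
    """R4 pass criterion: detected speed within +/-5% of the
    manually calculated speed."""
    return abs(detected_mps - actual_mps) <= tol * actual_mps

# Example: the drone crosses the 5 m track in 0.5 s -> 10 m/s;
# a detected speed of 10.4 m/s is within 5% and would pass.
```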

Component testing process

In this section we show and explain how the microphone and the camera detect drones, and whether these components meet our requirements.

Acoustic method

In order to detect drones in a recorded sound, we first need a reference sound to compare the recording against; our reference is, naturally, the sound of the drone. This sound file can be plotted as an audio plot with time on the x-axis and signal strength in arbitrary units on the y-axis (Fig 15.1). After applying the Fourier transform and normalising, a plot in the frequency domain is obtained, with frequency on the x-axis and normalised strength on the y-axis (Fig 15.2). The same is done for the recorded sound we want to test. In the analysis, the frequency domain is reduced to 500 Hz-3000 Hz, because our test drones and the background noises used fall within this range. The last step is to search the recorded normalised frequency-domain plot (Fig 15.3) for the 'global shape' of the normalised frequency-domain plot of the reference sound (Fig 15.2). This can be done with multiple strategies. We first explain how our method works and then why we chose it.

[Figure: audio plot and normalised frequency-domain plots (Fig 15.1–15.5)]
[Figure: Eq 15.1 and Eq 15.2]

Our goal is to obtain a set of frequencies that represents the 'fingerprint' of the normalised frequency-domain plot of the drone. We obtain this by setting a threshold value (λ) equal to 1. By lowering λ in small steps and marking the frequency peaks we encounter with red dots, we obtain a set of 'n' peak frequencies representing this specific sound. The value of 'n' was set by trial and error to the optimal values of 100 for the reference sound and 250 for the recorded sound. This is plotted in (Fig 15.4) for the drone sound and (Fig 15.5) for the recorded sound. Two sets of frequencies are now obtained, each representing its own sound, and the only thing left to do is to compare these sets. This is done by (Eq 15.1): for each frequency of the drone reference sound we check whether there is a frequency in the recorded set such that the absolute value of their difference is smaller than the tolerance. The tolerance was set to 2 Hz by trial and error. This is needed because the Doppler effect may play a role if the drone moves, and because the microphone may pick up slightly shifted frequencies. The number of frequencies that meet the requirement of (Eq 15.1), m, can thus lie between 0 and 100, where m=100 is perfect recognition of the drone. We chose this method because it is an intuitive concept and relatively easy to implement; in addition, the program is computationally very light and thus runs quickly. The number m then needs to be transformed into a percentage representing the chance of a drone being present. The simplest conversion would be a linear relationship; however, after conducting a few experiments at varying distances with and without background noise, the relationship did not appear to be linear. Up to roughly m=20 we know with great certainty that there is no drone, because in regular background noise a few peaks will often correspond.
In the same way, at m=90 we know with relatively good certainty that a drone is present, but the experiments showed that, especially with background noise, often not all frequencies from the drone set are reflected in the recorded sound's frequency set. The domain in between is therefore uncertain. The 'Logistic Formula' (Eq 15.2) fits this quite well. In the program we have taken a=0.16 and b=55. The value of 'b' is the number of corresponding peaks at which the certainty of a drone being present is 50%; the number 55 was chosen because it is the middle of the uncertain domain. The values of 'a' and 'b' are unfortunately not fully validated, as we did not have the proper equipment or enough time to conduct a thorough experiment.
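The peak-set comparison (Eq 15.1) and the logistic conversion (Eq 15.2) can be sketched in plain Python. This is a minimal illustration of the method described above, assuming the logistic form P(m) = 1/(1 + e^(-a(m-b))); the function names are our own:

```python
import math

def count_matches(drone_peaks, recorded_peaks, tolerance=2.0):
    """Eq 15.1: count fingerprint peaks of the drone reference sound
    that have a recorded peak within the tolerance (2 Hz, to allow
    for Doppler shift and microphone error)."""
    return sum(
        1 for f in drone_peaks
        if any(abs(f - g) < tolerance for g in recorded_peaks)
    )

def drone_probability(m, a=0.16, b=55):
    """Eq 15.2: logistic mapping from the matched-peak count m to the
    chance that a drone is present (b is the 50% point)."""
    return 1.0 / (1.0 + math.exp(-a * (m - b)))
```

With m = b = 55 matched peaks the probability is exactly 0.5, while m below roughly 20 or above roughly 90 maps close to 0 or 1, matching the uncertain middle domain described above.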

Visual method

Detecting drones accurately is crucial for various applications, including security and surveillance. While acoustic detection methods provide reliable results by identifying drone sounds in noisy environments, they can be limited in range or accuracy under certain conditions. To complement acoustic detection, we implemented a visual detection system using the YOLOv8 object detection model, a state-of-the-art computer vision model, and trained it on an online dataset of drone images to detect drones in real time.

Examples of a drone being detected by the YOLOv8 model.

The first step was data preparation, which required gathering a comprehensive dataset of drone images captured under varied conditions, including different backgrounds, distances, and angles. Each image in the dataset was manually labeled by drawing bounding boxes around drones, precisely marking their position within each image. This labeling allowed the model to learn the specific features that define a drone. A training configuration file was then created in the YOLO format, specifying the paths to the dataset images and labels, as well as information about the object class: drones in this case.

With the dataset ready, we moved to model initialization. To expedite training and improve accuracy, we started with the YOLOv8x model initialized with pre-trained weights from the COCO dataset. These weights provided a strong foundation, allowing the model to begin training with general object recognition capabilities, which were subsequently refined to specialize in detecting drones. The pre-trained model's understanding of common object features (such as edges, shapes, and textures) enabled faster learning and improved accuracy when adapted to the task of drone detection.

The training process involved configuring the model with optimized parameters. A moderate image size of 640x640 pixels was selected to balance detection accuracy with training speed. To maximize the use of GPU memory, we adjusted the batch size based on available resources and set a slightly higher learning rate to encourage faster convergence. Training was conducted for approximately 400 epochs, which gave the model sufficient time to learn the nuances of drone features and improve detection accuracy. Throughout training, the model minimized a loss function comprising three main components: bounding box accuracy, object presence probability, and class confidence. By refining these aspects, the model learned to distinguish drones from other objects and background noise (e.g., leaves).
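The YOLO-format training configuration file mentioned above is a small YAML file. A sketch of what ours looked like is shown below; the paths and file name are illustrative placeholders, not our actual directory layout:

```yaml
# drone.yaml -- illustrative YOLO dataset config (paths are placeholders)
path: datasets/drones      # dataset root directory
train: images/train        # training images, relative to 'path'
val: images/val            # validation images, relative to 'path'
names:
  0: drone                 # single object class for this task
```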
Upon completion, the model’s best-performing checkpoint was saved locally for later use. After training, the YOLOv8 model was deployed for real-time detection. The model was applied to video feeds, where it analyzed each frame individually, drawing bounding boxes around detected drones and accurately distinguishing them from other elements in the background. This capability made the model well-suited for real-time applications, such as live drone feed monitoring or video surveillance. The trained model was highly effective in detecting drones at varying distances and angles, making it robust in complex environments. The resulting real-time drone detection model achieved high accuracy, providing a reliable visual detection layer that can function independently or in conjunction with our acoustic detection method. By combining both acoustic and visual detection methods, we created a versatile, multi-modal system capable of detecting drones with greater reliability across a wide range of environmental conditions. This dual approach allows for improved accuracy in scenarios where one detection method may be insufficient, such as visually complex backgrounds or noisy environments, thus strengthening the overall robustness of our drone detection solution.

Sensor Fusion methodology

From literary research we found five sensor fusion algorithms that could be relevant to our detection system. These being:

  1. Extended Kalman Filter
  2. Convolutional Neural Networks
  3. Bayesian Networks
  4. Decision-Level Fusion with Voting Mechanisms
  5. Deep Reinforcement Learning

The first three listed here cannot be applied properly with only a vision-based and an acoustic sensing component, and the fifth algorithm is not feasible within the resources and time limit of the project. Therefore we decided to take a deeper look into Decision-Level Fusion with Voting Mechanisms, and how we could apply this to our tests.

We looked into three different ways of applying this method. In the first, we declare a drone nearby if either of the two sensors claims to have detected one. In the second, we declare a drone nearby only if both sensors claim to have detected one. In the third, if one sensor claims a detection, we lower the threshold for the other sensor while still requiring confirmation from both.

With the first method we found many false positives, meaning the system would indicate a drone nearby while this was not the case. This was because our vision-based component was not very accurate.

The second method worked better than the first, but only in specific scenarios: if the drone was far away, the camera could pick it up, but the microphone simply did not have the range, so detections were missed.

The third method balances the two. It produces fewer false positives than the first, as it still requires confirmation from both sensors, and misses fewer detections than the second, as it does not hold both sensors to the full threshold.

However, this still produced some false positives at times (consider again the situation with the drone far away). To adapt the method better to our system, we should first determine from the sensor data whether each sensor is even applicable in the situation. For example, if the visual component detects a drone and estimates it to be beyond a certain distance, the acoustic component's data is uninformative and should be taken out of the decision-making: from the tested specifications we know the microphone can only detect a drone up to a certain range, so if the camera picks up a drone beyond that range, the microphone will never detect it. The exact cut-off distance depends on the components used and would require more testing to determine accurately. This approach works best when more than two components are used, because otherwise the third method in some cases degenerates into the first, which produces many false positives, which we would like to avoid.
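The three voting strategies described above can be sketched as follows. The confidence thresholds are illustrative assumptions, not values we validated:

```python
def fuse_or(cam_detects: bool, mic_detects: bool) -> bool:
    # Method 1: declare a drone if either sensor fires
    # (prone to false positives from the weaker sensor).
    return cam_detects or mic_detects

def fuse_and(cam_detects: bool, mic_detects: bool) -> bool:
    # Method 2: require both sensors to fire
    # (misses drones outside the microphone's range).
    return cam_detects and mic_detects

def fuse_adaptive(cam_conf: float, mic_conf: float,
                  threshold: float = 0.8, relaxed: float = 0.5) -> bool:
    # Method 3: if one sensor fires at the normal threshold,
    # lower the bar for the other, but still require both to agree.
    if cam_conf >= threshold:
        return mic_conf >= relaxed
    if mic_conf >= threshold:
        return cam_conf >= relaxed
    return False
```

In a fuller version, a sensor whose tested range excludes the estimated drone distance would be dropped from the vote entirely, as described above.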

Future Work

Expanding this project could focus on several key areas that enhance functionality and reliability. Anyone looking to expand this project could investigate the following areas.

Improvement of Detection Accuracy through Improved Sensor Fusion

We have researched the different components that could be added to the system for drone detection. We tested two of these but not all, so future work could test the other components as well. Additionally, more testing could be done on sensor fusion once all components are in play.

Testing Interception Method

We have researched different interception methods, including how a net could be used to intercept projectiles. Future work could test intercepting objects using a net, determine whether our calculations are correct, and work out how to build a device that holds a net, how to launch it from the system, and how to reload the net into the system.

Field Testing in Diverse Environments

It is important to analyse whether our system would perform as expected in different environments. Future work could therefore test how adaptable the system is and how to improve its ability to adjust to different environments.

Literary Research

Autonomous Weapons Systems and International Humanitarian Law: Need for Expansion or Not[36]

A significant challenge with autonomous systems is ensuring compliance with international laws, particularly IHL. The paper delves into how such systems can be designed to adhere to humanitarian law and discusses critical and optional features such as the capacity to identify combatants and non-combatants effectively. This is directly relevant to ensuring our system's utility in operational contexts while adhering to ethical norms.

Artificial Intelligence Applied to Drone Control: A State of the Art[12]

This paper explores the integration of AI in drone systems, focusing on enhancing autonomous behaviors such as navigation, decision-making, and failure prediction. AI techniques like deep learning and reinforcement learning are used to optimize trajectory, improve real-time decision-making, and boost the efficiency of autonomous drones in dynamic environments.

Drone Detection and Defense Systems: Survey and Solutions[9]

This paper provides a comprehensive survey of existing drone detection and defense systems, exploring various sensor modalities like radio frequency (RF), radar, and optical methods. The authors propose a solution called DronEnd, which integrates detection, localization, and annihilation functions using Software-Defined Radio (SDR) platforms. The system highlights real-time identification and jamming capabilities, critical for intercepting drones with minimal collateral effects.

Advances and Challenges in Drone Detection and Classification[11]

This state-of-the-art review highlights the latest advancements in drone detection techniques, covering RF analysis, radar, acoustic, and vision-based systems. It emphasizes the importance of sensor fusion to improve detection accuracy and effectiveness.

Autonomous Defense System with Drone Jamming capabilities[37]

This patent describes a drone defense system comprising at least one jammer and at least one radio detector. The system is designed to send out interference signals that block a drone's communication or GPS signals, causing it to land or return. It also uses a technique where the jammer temporarily interrupts the interference signal to allow the radio detector to receive data and locate the drone's position or intercept its control signals.

Small Unmanned Aerial Systems (sUAS) and the Force Protection Threat to DoD[38]

This article discusses the increasing threat posed by small unmanned aerial systems (sUAS) to military forces, particularly the U.S. Department of Defense (DoD). It highlights how enemies are using these drones for surveillance and delivery of explosives.

The Rise of Radar-Based UAV Detection For Military: A Game-Changer in Modern Warfare[10]

This article discusses how radar-based unmanned aerial vehicle (UAV) detection is transforming military operations. SpotterRF’s systems use advanced radar technology to detect drones in all conditions, including darkness or bad weather. By integrating AI, these systems can distinguish between drones and non-threats like birds, improving accuracy and reducing false positives.

Swarm-based counter UAV defense system[39]

This article discusses autonomous systems designed to detect and intercept drones. It emphasizes the use of AI and machine learning to improve the real-time detection, classification, and interception of drones, focusing on autonomous UAVs (dUAVs) that can neutralize threats. The research delves into algorithms and swarm-based defense strategies that optimize how drones are intercepted.

Small Drone Threat Grows More Complex, Deadly as Tech Advances[40]

The article highlights the growing threat of small UAVs to military operations. It discusses how these systems are used by enemies for surveillance and direct attacks, and the various countermeasures the U.S. Department of Defense is developing to stop these attacks. It explores the use of jamming (interference with the connection between drone and controller), radio frequency sensing, and mobile detection systems.

US Army invents 40mm grenade that nets bad drones[41]

This article discusses recently developed technology that involves a 40mm grenade that deploys a net to capture and neutralise hostile drones. This system can be fired from a standard grenade launcher, providing a portable, low-cost method of taking down small unmanned aerial systems (sUAS) without causing significant collateral damage.


Making drones to kill civilians: is it ethical?[42]

Usually, anything where harm is done to innocent people is seen as unethical. This would mean that every company that somehow supplies items for war is doing something at least partially unethical. However, international law states that during war a country is not bound by all traditional ethics. This makes deciding what is and is not ethical harder.

Sociocultural objections to using killer drones against civilians:

-       Civilians (not in war) are getting killed by drones since the drones are not able to see the difference between people in war and people not in war

-       We should not see war as a ‘clash of civilizations’, as this would imply that civilians are also part of the war

Is it ethical to use drones to kill civilians?:

-       As said above, an international law applies during war between countries. This law implies:

o  Killing civilians = murder = prohibited

-       People getting attacked by drones, say that it is not the drones who kill people, but people kill people

A simple solution is to follow the 3 laws of robotics from Isaac Asimov:

-       A robot may not injure a human being or, through inaction, allow a human being to come to harm

-       A robot must obey orders given to it by human beings except when such orders conflict with the first law

-       A robot must protect its own existence, provided such protection does not conflict with the first or second law

But simply following these laws would be too simple, as they are fictional laws from literature, not actual legislation

The current drone killer’s rationale:

-       A person is targeted only if he/she is harmful to the interests of the country so long as he/she remains alive

This rationale is objected by:

-       This rationale is merely assumed, since international law says nothing about the random targeting of individuals

This objection is disproved by:

-       If the target is not in a warzone, he/she is not harmful to the interests of the country; thus a targeted person would not be a random person

Is it legal and ethical to produce drones that are used to kill civilians?:

A manufacturer of killer drones may not assume its drones are only being used peacefully.

Manufacturers of killer drones often issue cautionary recommendations, which serve mainly to keep the manufacturers in a legally safe position.

Conclusion:

The problem is that drone killing is not covered by the traditional laws of war. The literature is not uniformly opposed to drone killing, but the majority holds that killing civilians is unethical.


Ethics, autonomy, and killer drones: Can machines do right?[43]

The article looks into the ethics of certain weapons used in war (in the US). Because we can look back on weapons that were new at the time (like the atomic bomb), we can compare what people thought was ethical then with what we think is ethical now. The article uses two viewpoints to judge the ethics of a war technology: teleology and deontology. Teleology focuses on the outcome of an action, while deontology focuses on the duty behind an action.

The article first looks at the atomic bomb, which from a teleological viewpoint could be seen as ethical, as it would end the war quickly and thereby save lives in the long term. Deontology could also call it ethical, since possessing such strong weapons demonstrates superiority and intimidates other countries in war.

Next in the discussion is a torture program. According to teleology this is ethical, since torturing some people to extract critical information could prevent more deaths in the future.

The article then questions AI-enabled drones. For ethical AI, the AI should always be governed by humans, bias should be reduced (many civilians are being killed now) and there should be more transparency. For a country this is more challenging, since it also has to focus on safety and winning the war. This is why, in contrast to the atomic bomb, where teleology and deontology agreed, there is now a contrast between the two. Teleology focuses on outcomes, thus security and protection. Deontology focuses on global values, like human rights. The article says the challenge is to use AI technologies effectively while following ethical principles, and getting everyone to do so.


Survey on anti-drone systems: components, designs, and challenges[44]

Requirements an anti-drone system must have:

-       Drone-specialized detection (detect the drone)

-       Multi-drone defensibility (defend against multiple drones)

-       Cooperation with security organizations (restrictions to functionality should be discussed with public security organizations (police/military))

-       System portability (lightweight and using wireless networks)

-       Non-military neutralization (don’t use military weapons against drones)

Ways to detect drones:

-       Thermal detection (motors, batteries and internal hardware produce heat)

o  Works in all weather

o  Affordable

o  Limited range

-       RF scanner (capture wireless signals)

o  Can’t detect drones that don’t emit RF signals

o  Long range

-       Radar detection (detect objects and determine their shape)

o  Long range

o  Can’t see a drone that is not moving, since it is mistaken for an obstacle

-       Optical camera detection (detect from a video)

o  Short range

o  Weather dependent

Hybrid detection systems to detect drones:

-       Radar + vision

-       Multiple RF scanners

-       Vision + acoustic

Drone neutralization:

-       Hijacking/spoofing (Create fake signal to prevent drone from moving)

-       Geofencing (Prevent drone from approaching a certain point)

-       Jamming (Stopping radio communication between drone and controller)

-       Killer drones (Using drones to damage attacking drones)

-       Capture (Physically capture a drone) (for example with a net)

o  Terrestrial capture systems (human-held or vehicle-mounted)

o  Aerial capture systems (System on defender drones)

Determination of threat level:

-       Object

-       Flight path

-       Available time

(NOTE: The article goes into more depth about some mathematics to determine the threat level, which could be used in our system)
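As an illustration of how the three factors above could feed into a threat score (the survey's own mathematics is not reproduced here), a minimal weighted-scoring sketch in Python; the weights, scales, and input parameters are our own assumptions:

```python
# Hypothetical threat-level score combining the survey's three factors:
# object (danger estimate), flight path (closing speed and distance),
# and available time. Weights and scales are illustrative assumptions.

def threat_level(object_danger, approach_speed_ms, distance_m):
    """Return a score in [0, 1]; higher means more urgent.

    object_danger     -- 0..1 estimate of how dangerous the object class is
    approach_speed_ms -- closing speed towards the protected point (m/s)
    distance_m        -- current distance to the protected point (m)
    """
    if approach_speed_ms <= 0:
        # Hovering or moving away: effectively unlimited time to react
        time_to_impact = float("inf")
    else:
        time_to_impact = distance_m / approach_speed_ms  # seconds left
    # Less available time -> higher urgency (saturates at 1 for <= 1 s)
    urgency = min(1.0, 1.0 / max(time_to_impact, 1.0))
    return 0.5 * object_danger + 0.5 * urgency
```

A real system would tune the weights against the paper's threat model; this sketch only shows the shape of such a function.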


Artificial intelligence, robotics, ethics, and the military: a Canadian perspective[45]

The article looks not only at the ethics, but also at the social and legal aspects of using artificial intelligence in the military. It does so along three main aspects of AI: Accountability and Responsibility, Reliability, and Trust.

Accountability and Responsibility:

The article states that accountability and responsibility for the actions of an AI system lie with the operator, who is a human. However, when the AI malfunctions it becomes challenging to determine who is accountable.

Reliability:

AI today is not reliable enough and only performs well in the very specific situations it was made for. In military use you never know what situation an AI will end up in, which causes a lack of reliability. Verification of AI technologies is necessary, especially when dealing with the life and death of humans.

Trust:

People who use AI in the military should be taught how the AI works and to what extent they can trust it. Too much or too little trust in AI can lead to big mistakes. The makers of these AI systems should be more transparent, so that it can be understood what the AI does.

We need a proactive approach to minimize the risks of AI. This means that everyone who uses or is involved with AI in the military should carefully consider the risks that AI brings.


When AI goes to war: Youth opinion, fictional reality and autonomous weapons[46]

The article looks into the responsibilities and risks of fully autonomous robots in war. It does this by asking youth participants about the topic, combined with other research and theory.

The article found that the participants felt that humans should be responsible for the actions of autonomous robots. This is supported by theory stating that since robots do not have emotions like humans do, they cannot be responsible for their actions in the same way humans are. If autonomous robots were programmed with some ethics in mind, the robot could in some way be held accountable for its actions as well. How this responsibility should be divided between humans and robots remained unclear in the article: some said responsibility lay purely with the engineers, designers and government, while others said that human and robot shared responsibility.

The article also found that there are still fears of fully autonomous robots, stemming from old myths and social media claims that autonomous robots can turn against humans to destroy them.

As for the legal side, autonomous robots can potentially violate laws during war, especially if they are not held responsible for their actions. This worries the youth.

For the youth, the threats that fully autonomous robots bring outweigh the benefits. This is a sign for the scientific community to further develop and implement norms and regulations for autonomous robots.


Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review[47]

Summary:

•The paper provides a comprehensive review of drone detection and classification technologies. It delves into the various methods employed to detect and identify drones, including radar, radio frequency (RF), acoustic, and vision-based systems. Each has its strengths and weaknesses, after which the authors discuss 'sensor fusion', where combining detection methods improves system performance and robustness.

Key takeaways:

•Sensor fusion should be incorporated into the system to improve performance and robustness
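As a minimal illustration of the sensor-fusion idea, per-sensor detection confidences can be combined at the decision level. The sketch below is our own (a noisy-OR combination), not an implementation from the paper, and it assumes the sensors' errors are independent, which a real system would have to validate:

```python
# Decision-level sensor fusion sketch: each sensor reports an independent
# detection probability; under the independence assumption the drone is
# missed only if EVERY sensor misses it, so P(detect) = 1 - prod(1 - p_i).
# This is an illustrative noisy-OR combination, not the paper's method.

def fuse_detections(probs):
    """Fuse per-sensor detection probabilities into one probability."""
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)   # probability that this sensor also misses
    return 1.0 - miss

# Example: RF scanner 60% confident, camera 70%, microphone 50% --
# the fused confidence is noticeably higher than any sensor alone.
fused = fuse_detections([0.6, 0.7, 0.5])
```

In practice correlated failure modes (e.g. fog degrading both camera and thermal) break the independence assumption, which is why the review stresses combining complementary modalities.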

Counter Drone Technology: A Review[48]

Summary:

•The article provides a comprehensive analysis of current counter-drone technologies and categorizes counter-drone systems into three main groups: detection, identification, and neutralization technologies. Detection technologies include radar, RF detection, etc. Once a drone is detected, it must be identified as friend or foe. The review discusses methods such as machine learning algorithms and signature signal libraries. It covers various neutralization methods, including jamming (RF and GPS), laser-based systems, and kinetic solutions like nets or projectiles, and the challenges each method faces.

Key takeaways:

•Integration of multiple sensor technologies is critical

•Non-kinetic neutralization methods should be prioritized where possible to avoid unintended consequences

A Soft-Kill Reinforcement Learning Counter Unmanned Aerial System (C-UAS) with Accelerated Training[49]

Summary:

•This article discusses the development of a counter-drone system that utilizes reinforcement learning using non-lethal (“soft-kill") techniques. The system is designed to learn and adapt to various environments and drone threats using simulated environments.

Key takeaways:

•C-UAS systems must be rapidly deployable

•C-UAS systems should be trained in simulated environments to improve robustness and adaptability

Terrorist Use of Unmanned Aerial Vehicles: Turkey's Example[50]

Summary:

•The article examines how terrorist organizations have utilized drones for surveillance, intelligence gathering, and attacks. It highlights the growing accessibility of consumer drones, which are repurposed for malicious use, and the various counter-UAV technologies and tactics employed by Turkish forces to mitigate this threat.

Key takeaways:

•Running costs must be kept minimal. Access to affordable drones is widespread.

•Both kinetic and non-kinetic interception must be available if the system is to be used in urban or otherwise populated environments

Impact-point prediction of trajectory-correction grenade based on perturbation theory[51]

Summary:

•The article discusses trajectory prediction and correction methods for improving the accuracy of artillery projectiles. By modeling and simulating the effects of small perturbations in projectile flight, the study proposes an impact-point prediction algorithm. While the algorithm is aimed at improving artillery accuracy, it could potentially be used to predict the trajectory and impact location of drone-dropped explosives.

Key takeaways:

•Detailed description of real-time trajectory prediction corrections

•Challenge to balance efficiency with accuracy in path prediction algorithms
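The paper's perturbation-theory corrections are beyond this summary, but the core idea of predicting an impact point from a projectile's current state can be sketched with a plain point-mass model. The sketch below is our own illustration; the drag constant, step size, and 2D simplification are assumptions, not values from the article:

```python
import math

# Point-mass impact-point prediction by forward Euler integration.
# The quadratic-drag model and constants below are illustrative
# assumptions; the paper's perturbation corrections are not reproduced.

G = 9.81        # gravity (m/s^2)
K_DRAG = 0.01   # lumped drag constant (1/m), assumed value

def predict_impact(x, y, vx, vy, dt=0.001):
    """Integrate a 2D trajectory until the projectile reaches y = 0.

    (x, y) is the current position in metres, (vx, vy) the current
    velocity in m/s. Returns the horizontal impact coordinate (m).
    """
    while y > 0.0:
        v = math.hypot(vx, vy)
        # Quadratic drag opposes the velocity vector
        ax = -K_DRAG * v * vx
        ay = -G - K_DRAG * v * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x
```

The efficiency/accuracy trade-off the paper mentions shows up directly here: a smaller `dt` gives a more accurate impact point but costs more iterations per prediction.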


Armed Drones and Ethical Policing: Risk, Perception, and the Tele-Present Officer[52]

This paper discusses the tele-present officer operating ‘unmanned drones’. It looks at the topic from the attacker’s point of view, but it can also be applied to ‘attacking’ incoming drones, where a person still should (or should not) ‘pull the trigger’ to intercept a drone, with the potential risk of redirecting it into another crowd.


The Ethics and Legal Implications of Military Unmanned Vehicles[53]

This paper states that human soldiers/marines also do not agree on what constitutes ethical warfare. It gives a few examples of questions with controversial answers among the soldiers/marines. (We may use this to argue why our device should or should not be autonomous.)


Countering the drone threat: implications of C-UAS technology for Norway in an EU and NATO context[54]

This paper gives clear insight into different scenarios where drones can be a threat, for example over large crowds but also in warfare. It does not, however, give a concrete solution.


An anti-drone device based on capture technology[55]

This paper explores the capabilities of capturing a drone with a net. It also addresses some other forms of anti-drone devices, such as lasers, hijacking and RF jamming, among others.

The rest of the paper is very technical on the net capturing itself.


Four innovative drone interceptors.[56]

This paper states 5 different ways of detecting drones: acoustic detection and tracking with microphones positioned in a particular grid, video detection by cameras, thermal detection, radar detection and tracking, and lastly detection through the radio emissions of the drone. Because we also want to be able to catch ‘off-the-shelf’ drones, we have to investigate which methods are appropriate. For taking down the drone they give 6 options: missile launch, radio jamming, net throwers, machine guns, lasers, and drone interceptors. The 4 drone interceptors they introduce are a bit above our budget, as they are real drones with various kinds of generators to take down a drone (for example with a high electric pulse), but we could still look into this.
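The acoustic option above (microphones in a grid) can be illustrated with a minimal single-frequency check. The sketch below is our own, using the Goertzel algorithm to measure the power of one target tone in a microphone buffer; the propeller-tone frequency and threshold are assumed values, not taken from the paper:

```python
import math

# Toy acoustic check using the Goertzel algorithm: measure the power of a
# single target frequency (an assumed propeller tone) in a sample buffer.
# Frequency and threshold are illustrative assumptions.

def goertzel_power(samples, target_hz, sample_rate):
    """Return the squared magnitude of one DFT bin via Goertzel."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:                        # second-order IIR recursion
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def looks_like_drone(samples, sample_rate, tone_hz=200.0, threshold=1000.0):
    """Crude detector: strong energy at the assumed propeller tone."""
    return goertzel_power(samples, tone_hz, sample_rate) > threshold
```

A real detector would scan a band of frequencies and track harmonics rather than a single tone, but the per-tone cost here is one pass over the buffer, which fits a Raspberry Pi-class device.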


Comparative Analysis of ROS-Unity3D and ROS-Gazebo for Mobile Ground Robot Simulation[57]

This paper examines the use of Unity3D with ROS versus the more traditional ROS-Gazebo for simulating autonomous robots. It compares their architectures, performance in environment creation, resource usage, and accuracy, finding that ROS-Unity3D is better for larger environments and visual simulation, while ROS-Gazebo offers more sensor plugins and is more resource-efficient for small environments.

Logbook

Week 1
Name Total Break-down
Max van Aken 10h Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [21], [22], [23], [24], [25] (5h), Summarized and described key takeaways for papers /patents [21], [22], [23], [24], [25] (1h)
Robert Arnhold 16h Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [11], [12], [13], [14], [15] (10h), Summarized and described key takeaways for papers/patents [11], [12], [13], [14], [15] (2h)
Tim Damen 16h Attended lecture (2h), Attended meeting with group (2h), Analysed papers [12], [13], [14], [15], [16] (10h), Summarized and described key takeaways for papers [12], [13], [14], [15], [16] (2h)
Ruben Otter 17h Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [1], [2], [3], [4], [5] (10h), Summarized and described key takeaways for papers/patents [1], [2], [3], [4], [5] (2h), Set up Wiki page (1h)
Raul Sanchez Flores 16h Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [6], [7], [8], [9], [10] (10h), Summarized and described key takeaways for papers/patents [6], [7], [8], [9], [10] (2h),
Week 2
Name Total Break-down
Max van Aken 13h Attended lecture (30min), Attended meeting with group (1h), Research kinds of situations of device (2h), wrote about situations (1,5h), research ethics (6h), write ethics (2h)
Robert Arnhold 6.5h Attended lecture (30min), Attended meeting with group (1h), Worked on interview questions (1h), Organizing introductory interview (2h), Preparing interviews for next weeks(2h)
Tim Damen 13.5h Attended lecture (30min), Attended meeting with group (1h), Risk evaluation (2h), Important features (1h), Research on ethics of deflection (8h), Writing part about deflection (1h)
Ruben Otter 14.5h Attended lecture (30min), Attended meeting with group (1h), Analysed papers [2], [3], [4], [5] for research in drone detection (6h), Wrote about drone detection and its relation to our system using papers [2], [3], [4], [5] (4h), Analysed and summarized paper [6] (2h), Wrote about usage of simulation and its software (1h)
Raul Sanchez Flores 14.5h Attended lecture (30min), Attended meeting with group (1h) Researched and analysed papers about approaches to drone interception (4h) Researched and analysed papers about drone interception (6h), Evaluated different existing drone interception (3h)
Week 3
Name Total Break-down
Max van Aken 6h Attended lecture (30min), Meeting with group (1.5h), researched ethics (2h), rewriting ethics (2h)
Robert Arnhold 13h Attended lecture (30min), Meeting with group (1.5h), Completed interview (4h), planning next interview (1h), conceptualizing mechanical-side (3h), state-of-the-art research and review of previously prepared sources (3h)
Tim Damen 10h Attended lecture (30min), Meeting with group (1.5h), Analysed papers [6], [7], [8], [9], [10], [11] (6h), Rewrote ethics of deflection based on autonomous cars (2h)
Ruben Otter 10h Attended lecture (30min), Meeting with group (1.5h), Research into possible drone detection components (6h), Created list of possible prototype components (2h)
Raul Sanchez Flores 9.5h Attended lecture (30min), Meeting with group (1.5h), Make interception methods comparison more in-depth (5h), Created Pugh Matrix to compare different interception methods (2h), Email Ruud about components we need (30min)

Week 4

Name Total Break-down
Max van Aken 11h Attended lecture (30min), Meeting with group (1.5h), researched default drone specifications (2h) done calculation on mass prediction (6h), mathematica calculations (1h)
Robert Arnhold 13h Attended lecture (30min), Meeting with group (1.5h), investigating mechanical design concepts and documenting process (7h), summarizing discussion and learnings (2h, not yet finished), contacting new interviewees (2h)
Tim Damen 17h Attended lecture (30min), Meeting with group (1.5h), Analysed 4 papers (4h), Research on how to catch a projectile with a net (6h), Written text on how to catch a projectile with a net (2h), Worked on a Mathematica script for calculations to catch a projectile (3h)
Ruben Otter 15h Attended lecture (30min), Meeting with group (1.5h), Attended meeting with Ruud (1h), Setup Raspberry Pi (2h), Research into audio sensors (2h), Started coding and connecting the audio sensor with the Raspberry Pi (8h)
Raul Sanchez Flores Attended lecture (30min), Meeting with group (1.5h), Attended meeting with Ruud (1h), Write Specifications for our design (6h)

Week 5

Max van Aken 6h Attended lecture (30min), Meeting with group (1.5h), spring calculations (4h)
Robert Arnhold 11h Attended lecture (30min), Meeting with group (1.5h), discussing schedule with potential new interviewee, developing mechanical design (2h), summarizing focus and key learnings from previous interview (2h), investigating mechanical components, materials, structures, and other design features (5h)
Tim Damen 17h Attended lecture (30min), Meeting with group (1.5h), Finalize Mathematica to catch projectile in 2D (1h), Research on catching projectile in 3D (4h), Work on Mathematica to catch projectile in 3D (6h), Analysis of accuracy of 2D model (2h), Writing the text for the wiki (1h), General research to write text and do calculations (1h)
Ruben Otter 11h Attended lecture (30min), Meeting with group (1.5h), Continued integrating microphone sensor with Raspberry Pi (9h)
Raul Sanchez Flores Attended lecture (30min), Meeting with group (1.5h)

Week 6

Max van Aken 8h Attended lecture (30min), Meeting with group (1.5h), begin with presentation/text (6h)
Robert Arnhold 10h Attended lecture (30min), Meeting with group (1.5h), performing second interview with volunteer in Ukraine (2h), collecting and structuring notes for review (2h), continuing mechanical design research and formalizing CAD design (4h)
Tim Damen 8h Attended lecture (30min), Meeting with group (1.5h), Worked on creating the 3D model in Mathematica (5h), written part about the 3D model (1h)
Ruben Otter 11h Attended lecture (30min), Meeting with group (1.5h), Downloaded and learned MatLab syntax (3h), Research into different types of drones (2h), Finding drone sound files for specific drones (1h), Coded in MatLab to analyse frequency of the sample drone sound (3h)
Raul Sanchez Flores Attended lecture (30min), Meeting with group (1.5h)

Week 7

Max van Aken Attended lecture (30min), Meeting with group (1.5h)
Robert Arnhold 12h Attended lecture (30min), Meeting with group (1.5h), summarizing secondary interview notes into key takeaways and confirmed specifications (3h), completing CAD design for given specifications (2h), 3D printing main body of system (4h) and constructing body of prototype (1h)
Tim Damen 12.5h Attended lecture (30min), Meeting with group (1.5h), Research on type of projectiles, dimensions, effects of air resistance on these projectiles and on net (4h), Adapt model with wind (0.5h), Specified assumptions made based on research done (2h), Tested accuracy of 3D model (2h), Written text about accuracy of 3D model (1h), Cleared up some parts on the wiki page (1h)
Ruben Otter 16h Attended lecture (30min), Meeting with group (1.5h), Continue coding in MatLab to analyse the frequency of the sample drone sound and comparing these with other sound files (4h), Research into Sensor Fusion (3h), Apply Sensor Fusion methodology on the test results(2h), Create presentation(2h), Prepare for presentation(3h)
Raul Sanchez Flores Attended lecture (30min), Meeting with group (1.5h)

Week 8

Max van Aken Attended lecture (30min), Meeting with group (1.5h)
Robert Arnhold Attended lecture (30min), Meeting with group (1.5h)
Tim Damen 5h Attended lecture (30min), Meeting with group (1.5h), Change order on wiki and improve some small parts (1h), Finished up last pieces of my text (1h), Improved the photos (1h)
Ruben Otter 10.5h Attended lecture (30min), Meeting with group (1.5h), Prepare for presentation(1h), Present the presentation(30min), Some additional research into Sensor Fusion(2h), Write part about sensor fusion from a theoretical point of view(2h), Write part about how we applied certain sensor fusion methodology in our testing(2h), Write future work section(1h)
Raul Sanchez Flores Attended lecture (30min), Meeting with group (1.5h)

References

  1. How to read a risk matrix used in a risk analysis (assessor.com.au)
  2. The ethics of driverless cars | ACM SIGCAS Computers and Society
  3. The redirection of attacks by defending forces | International Review of the Red Cross (icrc.org)
  4. Is Society Ready for AI Ethical Decision Making? Lessons from a Study on Autonomous Cars - ScienceDirect
  5. Autonomous Cars: In Favor of a Mandatory Ethics Setting | Science and Engineering Ethics (springer.com)
  6. Autonomous decision making for a driver-less car | IEEE Conference Publication | IEEE Xplore
  7. IJCAI17-AlgorithmicBias-Distrib (cmu.edu)
  8. BBC - Ethics - War: In an ethical war, whom can you fight?
  9. Chiper F-L, Martian A, Vladeanu C, Marghescu I, Craciunescu R, Fratu O. Drone Detection and Defense Systems: Survey and a Software-Defined Radio-Based Solution. Sensors. 2022; 22(4):1453. https://doi.org/10.3390/s22041453
  10. The rise of Radar-Based UAV Detection for Military: A Game-Changer in Modern Warfare. (2024, June 11). Spotter Global. https://www.spotterglobal.com/blog/spotter-blog-3/the-rise-of-radar-based-uav-detection-for-military-a-game-changer-in-modern-warfare-8
  11. Seidaliyeva U, Ilipbayeva L, Taissariyeva K, Smailov N, Matson ET. Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review. Sensors. 2024; 24(1):125. https://doi.org/10.3390/s24010125
  12. Caballero-Martin D, Lopez-Guede JM, Estevez J, Graña M. Artificial Intelligence Applied to Drone Control: A State of the Art. Drones. 2024; 8(7):296. https://doi.org/10.3390/drones8070296
  13. Svanström, F.; Alonso-Fernandez, F.; Englund, C. Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities. Drones 2022, 6, 317. https://doi.org/10.3390/drones6110317
  14. Samaras, S.; Diamantidou, E.; Ataloglou, D.; Sakellariou, N.; Vafeiadis, A.; Magoulianitis, V.; Lalas, A.; Dimou, A.; Zarpalas, D.; Votis, K.; et al. Deep Learning on Multi Sensor Data for Counter UAV Applications—A Systematic Review. Sensors 2019, 19, 4837. https://doi.org/10.3390/s19224837
  15. Autonomous Ball Catcher Part 1: Hardware — Baucom Robotics
  16. Ball Detection and Tracking with Computer Vision - InData Labs
  17. Detecting Bullets Through Electric Fields – DSIAC
  18. An Introduction to BYTETrack: Multi-Object Tracking by Associating Every Detection Box (datature.io)
  19. Online Trajectory Generation with 3D camera for industrial robot - Trinity Innovation Network (trinityrobotics.eu)
  20. Explosives Delivered by Drone – DRONE DELIVERY OF CBNRECy – DEW WEAPONS Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD) (pressbooks.pub)
  21. Deadliest weapons: The high-explosive hand grenade (forcesnews.com)
  22. SM2025.pdf (myu-group.co.jp)
  23. Trajectory estimation method of spinning projectile without velocity input - ScienceDirect
  24. An improved particle filtering projectile trajectory estimation algorithm fusing velocity information - ScienceDirect
  25. (PDF) Generating physically realistic kinematic and dynamic models from small data sets: An application for sit-to-stand actions (researchgate.net)
  26. https://kestrelinstruments.com/mwdownloads/download/link/id/100/
  27. Normal and Tangential Drag Forces of Nylon Nets, Clean and with Fouling, in Fish Farming. An Experimental Study (mdpi.com)
  28. A model for the aerodynamic coefficients of rock-like debris - ScienceDirect
  29. Explosives Delivered by Drone – DRONE DELIVERY OF CBNRECy – DEW WEAPONS Emerging Threats of Mini-Weapons of Mass Destruction and Disruption ( WMDD) (pressbooks.pub)
  30. Influence of hand grenade weight, shape and diameter on performance and subjective handling properties in relations to ergonomic design considerations - ScienceDirect
  31. 'Molotov Cocktail' incendiary grenade | Imperial War Museums (iwm.org.uk)
  32. My Global issues - YouTube
  33. ⁣4 Types of Distance Sensors & How to Choose the Right One | KEYENCE America
  34. Ukrainian Mountain Battalion drop grenades on Russian forces with weaponised drone (youtube.com)
  35. Across, M. (2024, September 27). Surviving 25,000 Miles Across War-torn Ukraine | Frontline | Daily Mail. YouTube. https://youtu.be/kqKGYn13MeM?si=VPhO7jFG0sHQQXiW
  36. Willy, Enock, Autonomous Weapons Systems and International Humanitarian Law: Need for Expansion or Not (NOVEMBER 16, 2020). Available at SSRN: https://ssrn.com/abstract=3867978 or http://dx.doi.org/10.2139/ssrn.3867978
  37. Chmielus, T. (2024). Drone defense system (U.S. Patent No. 11,876,611). United States Patent and Trademark Office. https://patentsgazette.uspto.gov/week03/OG/html/1518-3/US11876611-20240116.html
  38. Kovacs, A. (2024, February 1). Small Unmanned aerial Systems (SUAS) and the force protection threat to DOD. RMC. https://rmcglobal.com/small-unmanned-aerial-systems-suas-and-the-force-protection-threat-to-dod/
  39. Brust, M. R., Danoy, G., Stolfi, D. H., & Bouvry, P. (2021). Swarm-based counter UAV defense system. Discover Internet of Things, 1(1). https://doi.org/10.1007/s43926-021-00002-x
  40. Small drone threat grows more complex, deadly as tech advances. (n.d.). https://www.nationaldefensemagazine.org/articles/2023/8/30/small-drone-threat-grows-more-complex-deadly-as-tech-advances
  41. Technology for innovative entrepreneurs & businesses | TechLink. (n.d.). https://techlinkcenter.org/news/us-army-invents-40mm-grenade-that-nets-bad-drones
  42. Making Drones to Kill Civilians: Is it Ethical? | Journal of Business Ethics (springer.com)
  43. Full article: Ethics, autonomy, and killer drones: Can machines do right? (tandfonline.com)
  44. IEEE Xplore Full-Text PDF:
  45. https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2848
  46. When AI goes to war: Youth opinion, fictional reality and autonomous weapons - ScienceDirect
  47. Seidaliyeva, U., Ilipbayeva, L., Taissariyeva, K., Smailov, N., & Matson, E. T. (2024). Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review. Sensors, 24(1), 125. https://doi.org/10.3390/s24010125
  48. Gonzalez-Jorge, Higinio & Aldao, Enrique & Fontenla-Carrera, Gabriel & Veiga Lopez, Fernando & Balvís, Eduardo & Ríos-Otero, Eduardo. (2024). Counter Drone Technology: A Review. 10.20944/preprints202402.0551.v1.
  49. Silva, Douglas & Machado, R. & Coutinho, Olympio & Antreich, Felix. (2023). A Soft-Kill Reinforcement Learning Counter Unmanned Aerial System (C-UAS) with Accelerated Training. IEEE Access. PP. 1-1. 10.1109/ACCESS.2023.3253481.
  50. Şen, Osman & Akarslan, Hüseyin. (2020). Terrorist Use of Unmanned Aerial Vehicles: Turkey's Example.
  51. Wang, Yu & Song, W.-D & Song, X.-E & Zhang, X.-Q. (2015). Impact-point prediction of trajectory-correction grenade based on perturbation theory. 27. 18-23.
  52. Armed Drones and Ethical Policing: Risk, Perception, and the Tele-Present Officer https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8367046/ Published online 2021 Jun 19. doi: 10.1080/0731129X.2021.1943844
  53. The Ethics and Legal Implications of Military Unmanned Vehicles, by Elizabeth Quintana, Head of Military Technology & Information Studies, Royal United Services Institute for Defence and Security Studies https://static.rusi.org/assets/RUSI_ethics.pdf
  54. Countering the drone threat: implications of C-UAS technology for Norway in an EU and NATO context https://www.researchgate.net/profile/Bruno-Martins-4/publication/348189950_Countering_the_Drone_Threat_Implications_of_C-UAS_technology_for_Norway_in_an_EU_and_NATO_context/links/5ff3240492851c13feeb0e08/Countering-the-Drone-Threat-Implications-of-C-UAS-technology-for-Norway-in-an-EU-and-NATO-context.pdf
  55. An anti-drone device based on capture technology Yingzi Chen, Zhiqing Li, Longchuan Li, Shugen Ma, Fuchun Zhang, Chao Fan, https://doi.org/10.1016/j.birob.2022.100060 https://www.sciencedirect.com/science/article/pii/S2667379722000237
  56. Four innovative drone interceptors. Svetoslav Zabunov, Garo Mardirossian. https://doi.org/10.7546/CRABS.2024.02.09
  57. Platt, J., Ricks, K. Comparative Analysis of ROS-Unity3D and ROS-Gazebo for Mobile Ground Robot Simulation. J Intell Robot Syst 106, 80 (2022). https://doi.org/10.1007/s10846-022-01766-2