PRE2024 1 Group2
=='''Group Members'''==
{| class="wikitable"
|+
!Name
!Student Number
!Study Programme
|-
|Tim Damen
|1874810
|Applied Physics
|-
|Ruben Otter
|1810243
|Computer Science
|}
=='''Problem Statement'''==
In modern warfare, drones play a huge role: they are relatively cheap to build and can cause a great deal of harm, while little is done against them. Large anti-drone systems exist that protect important areas from drone attacks, but individuals at the front line are not covered by them, as such systems are expensive and too large to carry around. This leaves individuals at the front lines vulnerable to drone attacks. We aim to show that an anti-drone system can be made cheap, lightweight and portable to protect these vulnerable individuals.
=='''Objectives'''==
To show that an anti-drone system can be made cheap, lightweight and portable, we do the following:
*Explore and determine the ethical implications of the portable device.
*Determine how drones and projectiles can be detected.
*Determine how a drone or projectile can be intercepted and/or redirected.
*Build a prototype of this portable device.
*Create a model for the interception.
*Prove the system's utility.
=='''Planning'''==
We will be working on this project over the upcoming 8 weeks. The table below shows when we aim to finish the different tasks within those 8 weeks.
{| class="wikitable"
|+
!Week
!Task
|-
|
|Start constructing prototype and software.
|-
|4
|Continue constructing prototype and software.
|-
|5
|Finish prototype and software.
|-
|6
|Evaluate testing results and make final changes.
|-
|7
|Create final presentation.
|-
|8
|Finish Wiki page.
|}
=='''Risk Evaluation'''==
A risk evaluation matrix can be used to determine where the risks lie within our project. It is based on two factors: the consequence if a task is not fulfilled and the likelihood that this happens. Both factors are rated on a scale from 1 to 5, and the matrix below maps them to a final risk level: low, medium, high or critical. Knowing the risks beforehand makes it possible to prevent failures, as it is known where special attention is required.<ref>How to read a risk matrix used in a risk analysis (assessor.com.au)</ref>
[[File:Imaged.png|thumb|Risk evaluation matrix|426x426px]]
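As an illustration of how such a matrix is read, the minimal sketch below maps the two factors to a risk band. The product-score banding is our own assumption, chosen so that it happens to reproduce the table below; the exact thresholds depend on the specific matrix used.
<syntaxhighlight lang="python">
def risk_level(consequence: int, likelihood: int) -> str:
    """Map consequence and likelihood (both 1-5) to a risk band.

    The banding thresholds are illustrative assumptions, chosen to
    match the risk table in this section.
    """
    if not (1 <= consequence <= 5 and 1 <= likelihood <= 5):
        raise ValueError("both factors must be on a 1-5 scale")
    score = consequence * likelihood
    if score <= 3:
        return "Low"
    if score <= 9:
        return "Medium"
    if score <= 16:
        return "High"
    return "Critical"

print(risk_level(5, 2))  # 'High', matching "Prove the system's utility"
</syntaxhighlight>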
{| class="wikitable"
!Task
!Consequence (1-5)
!Likelihood (1-5)
!Risk
|-
|Collecting 25 articles for the SoTA
|1
|1
|Low
|-
|Interviewing front line soldier
|1
|2
|Low
|-
|Finding features for our system
|4
|1
|Medium
|-
|Making a prototype
|3
|3
|Medium
|-
|Make the wiki
|5
|1
|Medium
|-
|Finding a detection method for drones and projectiles
|4
|1
|Medium
|-
|Determine (ethical) method to intercept or redirect drones and projectiles
|5
|1
|Medium
|-
|Prove the system's utility
|5
|2
|High
|}
=='''Target Market Interviews'''==
=== '''<u>Interview 1: Introductory interview with F.W., Australian Military Officer Cadet (military as a potential user)</u>''' ===
<u>Q1: What types of anti-drone (/projectile protection) systems are currently used in the military?</u>
*A: The systems currently in use include the Trophy system from Israel, capable of intercepting anti-tank missiles and potentially intercepting dropped payload explosives or drones. Another similar system is the Iron Fist, also from Israel. Additionally, Anduril's Lattice system offers detection and identification capabilities within a specified operational area, capable of detecting all possible threats and, if necessary, intercepting them.
<u>Q2: What are key features that make an anti-drone system effective in the field?</u>
*A: Effective systems are often deployed in open terrains like fields and roads, as drones have difficulty navigating dense forests. An effective anti-drone system could include a training or tactics manual to optimize use in these environments. Key features also include a comprehensive warning system that can alert troops on the ground to incoming drones, allowing them to take cover. Systems should not focus solely on interception but also on early detection.
<u>Q3: What are the most common types of drones that these systems are designed to counter?</u>
*A: The systems are primarily designed to counter consumer drones and homemade FPV drones, which are known for their speed and accuracy. These drones are incredibly effective and easy to construct.
<u>Q4: What are the most common types of drone interception that these systems employ?</u>
*A: The common types of interception include non-kinetic methods such as RF and optical laser jamming, which have a low cost-to-intercept. Kinetic methods include shooting with regular rounds, using net charges, or employing blast interception techniques such as those used in the Iron Fist system. High-cost methods like air-burst ammunition are also utilized due to their high interception likelihood.
<u>Q5: Are there any specific examples of successful or failed anti-drone operations you could share?</u>
*A: No specific examples were shared during the interview.
<u>Q6: How are drones/signals determined to be friendly or not?</u>
*A: The identification process was not detailed in the interview.
<u>Q7: What are the most significant limitations of current anti-drone systems?</u>
*A: Significant limitations include the high cost, which makes these systems unaffordable for individual soldiers or small groups. Most systems are also complex and require vehicle mounts for transportation, making them less suitable for quick or discreet maneuvers.
<u>Q8: Are there any specific environments where anti-drone systems struggle to perform well?</u>
*A: These systems typically perform well in large open areas but may struggle in environments with dense vegetation, which can offer natural cover for troops but limit the functionality of the systems.
<u>Q9: Can you give a rough idea of the costs involved in deploying these systems?</u>
*A: The costs vary widely, and detailed pricing information is generally available on a need-to-know basis, making it difficult to provide specific figures without direct inquiries to manufacturers.
<u>Q10: Which ethical concerns may be associated with the use of anti-drone systems, particularly regarding urban, civilian areas?</u>
*A: Ethical concerns are significant, especially regarding deployment in civilian areas. The military undergoes extensive ethical training, and all new systems are evaluated for their ethical implications before deployment.
<u>Q11: What improvements do you think are necessary to make anti-drone systems more effective?</u>
*A: The interview did not specify particular improvements but highlighted the need for systems that can be easily deployed and operated by individual soldiers.
<u>Q12: Do you think AI or machine learning could help enhance anti-drone systems?</u>
*A: The potential for AI and machine learning to enhance these systems is recognized, with ongoing research into their application in anti-drone and anti-missile technology.
<u>Q13: Is significant training required for personnel to effectively operate anti-drone systems?</u>
*A: The level of training required varies, but there is a trend towards developing systems that require minimal training, allowing them to be used effectively straight out of the box.
<u>Q14: How do these systems usually handle multiple drone threats or swarm attacks?</u>
*A: Handling multiple threats involves a combination of detection, tracking, and engagement capabilities, which were not detailed in the interview.
<u>Q15: How are these systems tested and validated before they are deployed in the field?</u>
*A: Systems undergo rigorous testing and validation processes, often conducted by military personnel to ensure effectiveness under various operational conditions.
=== '''<u>Interview 2: Interview with B.D. to specify requirements for system, Volunteer on Ukrainian front lines (volunteers and front line soldiers as potential users)</u>''' ===
<u>What types of anti-drone systems are currently used near the front lines?</u>
*Mainly see RF jammers and basic radar systems deployed here
*Detection priority
*Nothing for specifically intercepting munitions dropped by drones
**<u>Follow-up: What are general ranges for these RF jammers and other detection methods?</u>
***Very different; RF detectors or radar can have ranges of 100 meters, but it very much depends on the conditions
***Rarely available consistently across positions
***Soldiers usually listen for drones, but this can be difficult when moving or during incoming/outgoing fire
***Any RELIABLE range would be beneficial, but to give enough time to react a range around 20 or 30 meters would probably be the minimum
<u>What are key features that make an anti-drone system effective in the field?</u>
*Needs to be easy to move and hide, no bulky shapes or protruding parts, light
**No space in backpacks, cars, trucks, already often full of equipment. Nothing you couldn't carry and run with (estimate: 30kg)
**Pack-up and deployment speed: exfil usually takes multiple hours of packing up for drone operator squads, but front line troops are in constant movement. Anything above 30 seconds is insufficient [and that already seems to be pushing it]
*No language barrier advisable [assuming: basically meaning no training]
*Detection range is quite critical. A few seconds need to be given for soldiers or others near the front lines to prepare for an incoming impact
**<u>Follow-up: What kind of alarm to show drone detection would be best?</u>
***[...] Conclusion: sound is the only real option without immediately giving away the position of the system; drones will often fly over positions, so alarming soldiers with lights would risk revealing positions the drone otherwise would have flown past. Sound is already risky.
<u>What are the most significant limitations of current anti-drone systems?</u>
*Apart from RF and some alternatives, no real solution to drone drops on either side
*Systems that are light enough to be carried by one person and simple enough to operate without extensive training are rare, if any exist at all
<u>General Discussion on Day-to-Day Experiences</u>
*Drone attacks are very unpredictable. Periods of constant drone sounds above, then days of nothing. Constant vigilance necessary, which tires you out and puts you under serious pressure.
*According to people he has talked to, RF jammers and RF/radar-based detection are useful, but it is very difficult to counter an FPV drone with a payload if it is able to reach you and get close to you; very environment-dependent, e.g. open field vs forest
*Despite the conditions, there's a shared sense of purpose between locals, volunteers, etc.
*Constant fundraising needed to gather enough funds to fix vehicles (such as recently the van shown on the IG account), as well as equipment and supplies both for locals and soldiers
=='''Users and their Requirements'''==
We currently have two main use cases in mind for this project:
*Military forces facing threats from drones and projectiles.
*Privately-managed critical infrastructure in areas at risk of drone-based attacks.
From our literature review and our interviews, we have determined that users of the system will require the following:
*Minimal maintenance
*High reliability
*System should not pose an additional threat to its surroundings
*System must be personally portable
*System should work reliably in dynamic, often extreme, environments
*System should be scalable and interoperable in concept
=='''Ethical Framework'''==
The purpose of this project is to develop a portable defense system for neutralizing incoming projectiles, intended as a last line of defense in combat scenarios. In such circumstances, incoming projectiles may vary, and not all require neutralization, meaning decisions will vary based on specific conditions. Here, we focus on combat situations, such as war zones where environments evolve rapidly, necessitating adaptive equipment for soldiers. The system requires advanced software capable of distinguishing potential threats, such as differentiating between birds and drones or identifying grenades dropped from high altitudes<ref>Bharath Bharadwaj B S, Apeksha S, Bindu N P, S Shree Vidya Spoorthi, Udaya S (2023). ''A deep learning approach to classify drones and birds'', https://www.irjet.net/archives/V10/i4/IRJET-V10I4263.pdf</ref>. In these contexts, it must adhere to the Geneva Conventions, which emphasize minimizing harm to civilians. This principle guides the need for the device to evaluate its impact on civilians and ensure that its actions do not directly or indirectly cause harm. In addition, if the device is used in combat environments, it must operate within International Humanitarian Law, which states that civilians may not be targeted in a combat environment<ref>Making Drones to Kill Civilians: Is it Ethical? | Journal of Business Ethics (springer.com)</ref>.
Consider a scenario in which a kamikaze drone targets a military vehicle. The device could intercept the drone in one of two ways: by redirecting it or by triggering it to explode above the ground. To address this problem we look at the three major ethical traditions: virtue ethics, deontology and utilitarianism. First, virtue ethics, which in short chooses the path that a 'virtuous person' would choose<ref>Stanford Encyclopedia of Philosophy, Virtue Ethics. https://plato.stanford.edu/entries/ethics-virtue/</ref>. If the drone were redirected too close to a populated area, or if an aerial explosion resulted in scattered fragments, there is a potential risk of civilian casualties. Such scenarios introduce a moral dilemma similar to the "trolley problem"<ref>Peter Graham, ''Thomson's Trolley Problem'', DOI: 10.26556/jesp.v12i2.227</ref>: activating the system may protect soldiers while risking civilian harm, while inaction leaves soldiers vulnerable. Therefore, the device must be capable of directing threats to secure locations, minimizing risk to civilians and aiming to maximize overall safety. In this scenario virtue ethics cannot help us choose wisely, because we do not know what a virtuous person would choose with regard to redirecting a drone. Deontology is built upon strict rules to follow<ref>Ethics Unwrapped, Deontology. https://ethicsunwrapped.utexas.edu/glossary/deontology</ref>. In our example this could result in two opposing rules: 'Do redirect the drone' or 'Do not redirect the drone'. This would be an easy solution to the problem, but we are still left with the question of which rule to choose. Utilitarianism suggests that maximizing happiness often involves minimizing harm<ref>Utilitarianism meaning, ''Cambridge Dictionary''. https://dictionary.cambridge.org/dictionary/english/utilitarianism</ref>; it judges actions by their outcomes. Ideally, the device would therefore anticipate whether redirecting the drone would cause casualties, and how many. For now we assume that we cannot anticipate this, because doing so would likely require massive amounts of data and memory. With our environment in mind, we can assume that there will almost always be casualties at the drone's point of impact if it is not redirected; otherwise the device would not need to be at that location. If the drone is redirected, the chance that people are outside the trenches is relatively small, because such a person would be in plain sight of the enemy. From a utilitarian point of view, consistently redirecting the drone away from its point of impact therefore causes the least harm over time: the share of prevented casualties outweighs the share of caused casualties.
=='''Specifications'''==
To guide our design decisions, we have to create a set of specifications that reflect our users' needs more specifically. Our design decisions include our choice of drone detection and projectile interception, and using the knowledge gained from our literature review and interviews, we can set up a list of SMART requirements. These requirements will help us design the best prototype for our users, while also keeping within the bounds of International Humanitarian Law, as predefined by our ethical framework.
{| class="wikitable"
!ID
!Requirement
!Preference
!Constraint
!Category
!Testing Method
|-
|R1
|Detect Rogue Drone
|Detection range of at least 30m
|
|Software
|Simulate rogue drone scenarios in the field
|-
|R2
|Object Detection
|At least 90% recognition accuracy
|Detects even small, fast-moving objects
|Software
|Test with various object sizes and speeds in the lab
|-
|R3
|Detect Drone Direction
|Accuracy of 90%
|
|Software
|Use drones flying in random directions for validation
|-
|R4
|Detect Drone Speed
|Accuracy within 5% of actual speed
|Must be effective up to 20m/s
|Software
|Measure speed detection in controlled drone flights
|-
|R5
|Detect Projectile Speed
|Accurate speed detection for fast projectiles
|Must handle speeds above 10m/s
|Software
|Fire projectiles at varying speeds and record accuracy
|-
|R6
|Detect Projectile Direction
|Accuracy within 5 degrees
|
|Software
|Test with fast-moving objects in random directions
|-
|R7
|Track Drone with Laser
|Tracks moving targets within a 1m² radius
|Must follow targets precisely within the boundary
|Hardware
|Use a laser pointer to follow a flying drone in real-time
|-
|R8
|Can Intercept Drone/Projectile
|Drone/Projectile is within the 1m² square
|Must not damage surroundings or pose threat
|Hardware
|Test in a field, using projectiles and drones in motion
|-
|R9
|Low Cost-to-Intercept
|Interception cost under $50 per event
|
|Hardware & Software
|Compare operational cost per interception in trials
|-
|R10
|Low Total Cost
|Less than $2000
|Should include all components (detection + net)
|Hardware
|Budget system components and assess affordability
|-
|R11
|Portability
|System weighs less than 3kg
|
|Hardware
|Test for total weight and ease of transport
|-
|R12
|Easily Deployable
|Setup takes less than 5 minutes
|Must require no special tools or training
|Hardware & Software
|Timed assembly by users in various environments
|-
|R13
|Quick Reload/Auto Reload
|Reload takes less than 30 seconds
|Must be easy to reload in the field
|Hardware
|Measure time to reload net launcher in real-time scenarios
|-
|R14
|Environmental Durability
|Operates in temperatures between -20°C and 50°C
|Must work reliably in rain, dust, and strong winds
|Hardware
|Test in extreme weather conditions (wind, rain simulation)
|}
=== Justification of Requirements ===
In the first requirement '''R1''', '''Detect Rogue Drone''', we ask for a detection range of at least 30 meters. This range is crucial to give soldiers or operators enough time to react to potential threats. According to B.D. (Interview 2), a detection range of 20–30 meters would be the minimum needed to give front-line soldiers a reliable chance to prepare for impact or take cover. This range aligns with the typical operational range of RF and radar detection systems, as discussed in ''The Rise of Radar-Based UAV Detection For Military'', making it a practical and achievable requirement.

For '''Object Detection R2''', at least 90% recognition accuracy is preferred to prevent misidentifying other objects as drones, which could lead to wasted resources or unnecessary responses. ''Advances and Challenges in Drone Detection and Classification'' highlights the value of sensor fusion in improving detection accuracy, which would help the system distinguish drones from other objects in complex or cluttered environments. This accuracy is essential for military applications, where false detections could cause unnecessary alerts and distractions, and non-detections could leave the troops vulnerable.

The requirements to '''Detect Drone Direction R3''' with 90% accuracy and to '''Detect Drone Speed R4''' are justified by the needs of our projectile interception. Though we know from our interviews that most drones simply drop projectiles while hovering, we have to be able to handle the scenario in which the drone is moving while it drops the projectile, in order to calculate the trajectory of the projectile. Moreover, high-speed detection is necessary, as it can alert troops of the presence of a drone well before the drone drops a projectile, allowing them to take cover and/or shoot the drone down. Similarly, '''Detecting Projectile Speed R5''' and '''Detecting Projectile Direction R6''' with an accuracy of within 5 degrees allows the system to predict where a projectile is heading, and to calculate where the net should be shot to redirect it. As suggested by F.W., focusing not only on drones but on any incoming projectiles within range is essential to providing comprehensive situational awareness and maximizing the system's value in dynamic and potentially high-risk environments.

The requirement to '''Track Drone with Laser R7''' is justified by the need for precise targeting, and it serves as a preemptive measure until the implementation of the interception is complete. A laser tracking system that tracks within a 1m² radius allows for monitoring the movement of drones, letting the troops know where the drone is. The ability to '''Intercept Projectiles R8''' within a 1m² area ensures that the system can redirect the projectile using the 1m² net designed for our system. In Interview 1, F.W. explained that while interception methods like jamming are often preferable, kinetic interception may still be necessary for drones that pose an immediate threat.

A '''Low Cost-to-Intercept R9''' of under $50 per event is essential to ensure that a single troop or a group of troops can afford this system as a last line of defense: we want the price to intercept a drone to be as close to, if not lower than, the cost of the drone, especially for military operations that may require frequent use. The article ''Countering the drone threat: implications of C-UAS technology for Norway in an EU and NATO context'' discusses the importance of keeping interception costs low to ensure sustainability and usability in ongoing operations. By minimizing the cost per interception, the system remains practical and cost-effective for high-frequency use. Similarly, the requirement for '''Low Total Cost R10''' (under $2000) ensures accessibility to smaller or single military units. B.D. noted that cost is a major constraint, especially for front-line volunteers who often rely on fundraising to support equipment needs. A lower total cost makes the system more widely deployable and achievable for those with limited budgets.

'''Portability R11''', with a target weight of under 3 kg, is crucial for ease of use and mobility. According to B.D., lightweight systems are essential for front-line soldiers, who have limited space and carry substantial gear. A portable system ensures that soldiers can transport it efficiently and integrate it into their equipment without compromising mobility or comfort. '''Ease of Deployment R12''' is also essential, with a setup time of less than 5 minutes. B.D. emphasized that in unpredictable field environments, soldiers need a system that requires minimal setup. Quick deployment is highly important in dynamic situations where immediate action is required, allowing personnel to be prepared with minimal downtime. '''Quick Reload/Auto Reload R13''' capabilities, with a reload time of under 30 seconds, enable the system to handle multiple threats in rapid succession. This requirement addresses the feedback from F.W., who noted the importance of speed in high-risk areas. Fast reloading helps maintain the system's readiness, preventing delays in the event of multiple drone or projectile threats. Lastly, '''Environmental Durability R14''' ensures that the system operates reliably across a wide temperature range and in adverse weather conditions. ''The Rise of Radar-Based UAV Detection For Military'' stresses that systems used in real-world military applications must withstand rain, dust, and extreme temperatures. Durability in harsh environments increases the system's utility, ensuring it remains effective regardless of weather or climate conditions.
=='''Detection'''==
===Drone Detection===
====The Need for Effective Drone Detection====
With the rapid advancement and production of unmanned aerial vehicles (UAVs), particularly small drones, new security challenges have emerged for the military sector.<ref name=":0" /> Drones can be used for surveillance, smuggling, and launching explosive projectiles, posing threats to infrastructure and military operations.<ref name=":0" /> Within our project we mostly look at the threat of drones launching explosive projectiles. Our objective is to develop a portable, last-line-of-defense system that can detect drones and intercept and/or redirect the projectiles they launch. An important aspect of such a system is the capability to reliably detect drones in real-time, in possibly dynamic environments.<ref name=":1" /> The challenge here is to create a solution that is not only effective but also lightweight, portable, and easy to deploy.
====Approaches to Drone Detection====
Numerous approaches have been explored in the field of drone detection, each with its own set of advantages and disadvantages.<ref name=":2" /><ref name=":1" /> The main methods include radar-based detection, radio frequency (RF) detection, acoustic-based detection, and vision-based detection.<ref name=":0" /><ref name=":2" /> It is essential for our project to analyze these methods within the context of portability and reliability, to identify the most suitable method, or combination of methods.
=====''Radar-Based Detection''=====
Radar-based systems are considered one of the most reliable methods for detecting drones.<ref name=":2" /> Radar systems transmit short electromagnetic waves that bounce off objects in the environment and return to the receiver, allowing the system to detect an object's attributes, such as its range, velocity, and size.<ref name=":2" /><ref name=":1" /> Radar is especially effective in detecting drones in all weather conditions and can operate over long ranges.<ref name=":0" /><ref name=":2" /> Radars, such as active pulse-Doppler radar, can track the movement of drones and distinguish them from other flying objects based on the Doppler shift caused by the motion of the propellers (the micro-Doppler effect).<ref name=":0" /><ref name=":1" /><ref name=":2" />
Despite its effectiveness, radar-based detection comes with certain limitations that must be considered. First, traditional radar systems are rather large and require significant power, making them less suitable for a portable defense system.<ref name=":2" /> Additionally, radar can struggle to detect small drones flying at low altitudes due to their limited radar cross-section (RCS), particularly in cluttered environments like urban areas.<ref name=":2" /> Millimeter-wave radar technology, which operates at high frequencies, offers a potential solution by providing better resolution for detecting small objects, but it is also more expensive and complex.<ref name=":2" /><ref name=":0" />
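For reference, the Doppler shift this detection principle rests on is, for a target with radial velocity <math>v_r</math> and radar carrier frequency <math>f_c</math> (wavelength <math>\lambda = c/f_c</math>):
<math>f_D = \frac{2 v_r}{\lambda} = \frac{2 v_r f_c}{c}.</math>
The rotating propeller blades superimpose periodic sidebands on this shift, which is the micro-Doppler signature used to separate drones from, for example, birds.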
=====''Radio Frequency (RF)-Based Detection''=====
Another common method is detecting drones through radio frequency (RF) analysis.<ref name=":0" /><ref name=":1" /><ref name=":2" /><ref name=":3" /> Most drones communicate with their operators via RF signals, using the 2.4 GHz and 5.8 GHz bands.<ref name=":0" /><ref name=":2" /> RF-based detection systems monitor the electromagnetic spectrum for these signals, allowing them to identify the presence of a drone and its controller on these RF bands.<ref name=":2" /> One advantage of RF detection is that it does not require line-of-sight, meaning the detection system does not need to have a view of the drone.<ref name=":2" /> It can also operate over long distances, making it effective in a wide range of scenarios.<ref name=":2" />
However, RF-based detection systems do have their limitations. They are unable to detect drones that do not rely on communication with an operator, as with autonomous drones.<ref name=":1" /> The systems are also less reliable in environments where many RF signals are present, such as cities.<ref name=":2" /> Therefore, in situations where high precision and reliability are a must, RF-based detection alone might not be suitable.
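To make the RF monitoring idea concrete, below is a minimal energy-detection sketch over complex baseband samples. It assumes the samples are already tuned to a control channel (e.g. in the 2.4 GHz band) by some SDR front end; the noise floor, margin and synthetic test signal are our own illustrative choices, and a real detector would use protocol-aware features rather than raw band energy.
<syntaxhighlight lang="python">
import numpy as np

def band_energy_detect(iq, noise_floor_db, margin_db=10.0):
    """Flag a possible drone control link from complex baseband samples.

    iq: complex samples already tuned to the band of interest.
    Returns True if the average power exceeds the estimated noise
    floor by margin_db decibels.
    """
    power_db = 10.0 * np.log10(np.mean(np.abs(iq) ** 2) + 1e-12)
    return power_db > noise_floor_db + margin_db

# Synthetic test: noise only (about -17 dB here) vs. noise plus a carrier.
rng = np.random.default_rng(0)
noise = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) * 0.1
carrier = np.exp(2j * np.pi * 0.1 * np.arange(4096))
print(band_energy_detect(noise, noise_floor_db=-17.0))            # False
print(band_energy_detect(noise + carrier, noise_floor_db=-17.0))  # True
</syntaxhighlight>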
=====''Acoustic-Based Detection''=====
Acoustic detection systems rely on the unique sound signature produced by drones, particularly the noise generated by their propellers and motors.<ref name=":2" /> These systems use highly sensitive microphones to capture these sounds and then analyze the audio signals to identify the presence of a drone.<ref name=":2" /> The advantage of this type of detection is that it is rather low cost and also does not require line-of-sight; therefore it is mostly used for detecting drones behind obstacles in non-open spaces.<ref name=":2" /><ref name=":0" />
However, it also has its disadvantages. In environments with a lot of noise, such as battlefields, these systems are not as effective.<ref name=":1" /><ref name=":2" /> Additionally, some drones are designed to operate silently.<ref name=":1" /> They also only work at rather short range, since sound weakens over distance.<ref name=":2" />
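As a toy illustration of the analysis step, the sketch below looks for a dominant narrowband peak in a microphone frame, a crude stand-in for a real propeller-signature classifier. The frequency band, peak ratio and the synthetic 600 Hz "propeller" tone are assumptions made for the example.
<syntaxhighlight lang="python">
import numpy as np

def propeller_tone_present(frame, fs, band=(80.0, 4000.0), peak_ratio=8.0):
    """Detect a dominant narrowband tone in the given frequency band.

    frame: mono audio samples; fs: sample rate in Hz. Returns True if
    the strongest spectral line in the band exceeds peak_ratio times
    the median in-band level (thresholds are illustrative guesses).
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    levels = spectrum[in_band]
    return levels.max() > peak_ratio * np.median(levels)

# Synthetic test: broadband noise vs. noise plus a 600 Hz "propeller" tone.
rng = np.random.default_rng(1)
fs, n = 16_000, 4096
t = np.arange(n) / fs
noise = rng.standard_normal(n) * 0.05
print(propeller_tone_present(noise, fs))                                      # False
print(propeller_tone_present(noise + 0.5 * np.sin(2 * np.pi * 600 * t), fs))  # True
</syntaxhighlight>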
=====''Vision-Based Detection''=====
Vision-based detection systems use cameras, either in the visible or infrared spectrum, to detect drones visually.<ref name=":0" /><ref name=":2" /> These systems rely on image recognition algorithms, often by use of machine learning.<ref name=":2" /><ref name=":3" /> Drones are then detected based on their shape, size and movement.<ref name=":3" /> The main advantage of this type of detection is that the operators themselves can confirm the presence of a drone and are able to differentiate between a drone and other objects such as birds.<ref name=":2" />
However, there are also disadvantages to vision-based detection systems.<ref name=":1" /><ref name=":2" /> These systems are highly dependent on environmental conditions: they need a clear line-of-sight and good lighting, and weather conditions can additionally impact the accuracy of the systems.<ref name=":1" /><ref name=":2" />
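The cited approaches are CNN-based; as a self-contained placeholder that runs without a trained model, here is a classical frame-differencing sketch with OpenCV that flags small moving blobs between two frames. All thresholds and area bounds are assumptions, and a real system would feed such candidate boxes to a classifier.
<syntaxhighlight lang="python">
import cv2

def moving_blobs(prev_gray, gray, min_area=20, max_area=5000):
    """Return bounding boxes of small moving objects between two frames.

    A crude stand-in for a trained detector: threshold the absolute
    frame difference and keep contours in a drone-plausible size range.
    """
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if min_area <= cv2.contourArea(c) <= max_area]

# Illustrative use on two consecutive frames from a camera:
cap = cv2.VideoCapture(0)
ok1, f1 = cap.read()
ok2, f2 = cap.read()
if ok1 and ok2:
    g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
    print(moving_blobs(g1, g2))  # e.g. [(x, y, w, h), ...]
cap.release()
</syntaxhighlight>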
====Best Approach for Portable Drone Detection====
For our project, which focuses on a portable system, the ideal drone detection method must balance effectiveness, portability and ease of deployment. Based on this, a sensor fusion approach appears to be the most appropriate.<ref name=":2" />
=====''Sensor Fusion Approach''=====
Given the limitations of each individual detection method, a sensor fusion approach, combining radar, RF, acoustic and vision-based sensors, offers the best chance of achieving reliable and accurate drone detection in a portable system.<ref name=":2" /> Sensor fusion allows the strengths of each detection method to complement the weaknesses of the others, providing more effective detection in dynamic environments.<ref name=":2" />
#'''Radar Component:''' A compact, millimeter-wave radar system would provide reliable detection in different weather conditions and across long ranges.<ref name=":1" /> While radar systems are traditionally bulky, recent advancements make it possible to develop portable radar units that can be used in a mobile system.<ref name=":2" /> These would most likely be less effective than full-size radars, which the sensor fusion approach compensates for.<ref name=":2" />
#'''RF Component:''' Integrating an RF sensor will allow the system to detect drones communicating with remote operators.<ref name=":2" /> This component is lightweight and relatively low-power, making it ideal for a portable system.<ref name=":2" />
#'''Acoustic Component:''' Adding acoustic sensors can help detect drones flying at low altitudes or behind obstacles, where radar may struggle.<ref name=":0" /><ref name=":2" /> This component is mainly just a microphone, with the rest handled in software, and is therefore also ideal for a portable system.<ref name=":2" />
#'''Vision-Based Component:''' A camera system equipped with machine learning algorithms for image recognition can provide visual confirmation of detected drones.<ref name=":3" /><ref name=":2" /> This component can be added by use of a lightweight, wide-angle camera, which again does not keep the device from being portable.<ref name=":2" />
====Conclusion====
To achieve portability in our system we have to restrict certain sensors and/or components; therefore, to still achieve effectiveness in drone detection, the best approach is sensor fusion. The system would integrate radar, RF, acoustic and vision-based detection. Together these compensate for each other's limitations, resulting in an effective, reliable and portable system.
=== Sensor Fusion ===
When it comes to detection, sensor fusion is essential for integrating inputs from multiple sensor types, which in our case are radar, radio frequency, acoustic and vision-based, to achieve higher accuracy and reliability in dynamic conditions. Sensor fusion can occur at different stages, with '''early''' and '''late fusion'''.<ref name=":5">Svanström, F.; Alonso-Fernandez, F.; Englund, C. Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities. ''Drones'' 2022, ''6'', 317. https://doi.org/10.3390/drones6110317</ref>
'''Early fusion''' integrates raw data from the various sensors at the initial stages, creating a unified dataset for processing. This approach captures the relations between the different data sources, but requires extensive computational resources, especially when the data of the different sensors are not of the same type, for example acoustic versus visual data.<ref name=":5" />
'''Late fusion''' integrates the processed outputs/decisions of each sensor. This method allows each sensor to apply its own processing approach before the data is fused, making it more suitable for systems where sensor outputs vary in type. According to recent studies in UAV tracking, late fusion improves robustness by allowing each sensor to operate independently under its optimal conditions.<ref name=":5" /><ref name=":6">Samaras, S.; Diamantidou, E.; Ataloglou, D.; Sakellariou, N.; Vafeiadis, A.; Magoulianitis, V.; Lalas, A.; Dimou, A.; Zarpalas, D.; Votis, K.; et al. Deep Learning on Multi Sensor Data for Counter UAV Applications—A Systematic Review. ''Sensors'' 2019, ''19'', 4837. https://doi.org/10.3390/s19224837</ref>
We therefore conclude that late fusion is best suited for our system.
==== Algorithms for Sensor Fusion in Drone Detection ====
#'''Extended Kalman Filter (EKF)''': The EKF is widely used in sensor fusion for its ability to handle nonlinear data, making it suitable for tracking drones in real-time by predicting trajectories despite noisy inputs. The EKF has proven effective for fusing data from radar and LiDAR, which is essential when estimating an object's location in complex settings like urban environments.<ref name=":6" />
#'''Convolutional Neural Networks (CNNs)''': Primarily used in vision-based detection, CNNs process visual data to recognize drones based on shape and movement. CNNs are particularly useful in late-fusion systems, where they can add a visual confirmation layer to radar or RF detections, enhancing overall reliability.<ref name=":5" /><ref name=":6" />
#'''Bayesian Networks''': These networks manage uncertainty by probabilistically combining sensor inputs. They are highly adaptable in scenarios with varied sensor reliability, such as combining RF and acoustic data, making them suitable for applications where conditions can impact certain sensors more than others.<ref name=":5" />
#'''Decision-Level Fusion with Voting Mechanisms''': This approach aggregates sensor outputs based on their agreement, or "votes", regarding an object's presence. This simple yet robust method enhances detection accuracy by prioritizing consistent detections across sensors (see the sketch below).<ref name=":5" />
#'''Deep Reinforcement Learning (DRL)''': DRL optimizes sensor fusion adaptively by learning from patterns in sensor data, making it particularly suited for applications requiring dynamic adjustments, like drone tracking in unpredictable environments. DRL has shown promise in managing fusion systems by balancing multiple inputs effectively in real-time applications.<ref name=":5" /><ref name=":6" />
These algorithms have demonstrated efficacy across diverse sensor configurations in UAV applications. The EKF and Bayesian networks are particularly valuable when fusing data from radar, RF, and acoustic sources, given their ability to manage noisy and uncertain data, while CNNs and voting mechanisms add reliability in vision-based and multi-sensor contexts. However, without testing, no conclusions can be drawn on which algorithms can be applied effectively and which would work best.<ref name=":5" /><ref name=":6" />
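As an illustration of decision-level (late) fusion, here is a minimal voting sketch. The SensorReport structure, the vote and confidence thresholds, and the example values are our own assumptions, not taken from the cited studies.
<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class SensorReport:
    """Processed output of one sensor's own detection pipeline."""
    sensor: str        # e.g. "radar", "rf", "acoustic", "vision"
    detected: bool     # did this sensor decide a drone is present?
    confidence: float  # 0.0-1.0, produced by that sensor's own algorithm

def fuse_reports(reports, min_votes=2, min_confidence=0.5):
    """Decision-level (late) fusion by voting.

    Each sensor has already run its own detection algorithm; here we
    only count how many sufficiently confident sensors agree.
    """
    votes = sum(1 for r in reports
                if r.detected and r.confidence >= min_confidence)
    return votes >= min_votes

# Example: radar and vision agree; acoustic is drowned out by battlefield
# noise, and an autonomous drone emits no RF link.
reports = [
    SensorReport("radar", True, 0.8),
    SensorReport("vision", True, 0.7),
    SensorReport("acoustic", False, 0.2),
    SensorReport("rf", False, 0.1),
]
print(fuse_reports(reports))  # True: two confident, independent votes
</syntaxhighlight>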
=='''Interception'''==
===Protection against Projectiles Dropped by Drones===
In modern warfare and security scenarios, small drones have emerged as cheap and effective tools capable of carrying payloads that can be dropped over critical areas or troops. Rather than intercepting the drones themselves, which would require a high-cost interception method, we shifted our focus to intercepting the projectiles they drop: these are usually artillery shells with an easily computable trajectory, and they are far lighter and thus cheaper and easier to intercept. By targeting these projectiles as a last line of defense, our system provides a more cost-effective way to neutralize potential threats only when necessary. This approach minimizes the resources spent on countering non-threatening drones while concentrating defensive efforts on imminent, high-risk projectiles, as a last resort for individual troops to remain safe.
===Key Approaches to Interception===
====Kinetic Interceptors====
Kinetic methods involve the destruction or incapacitation of projectiles dropped by drones through direct impact. These systems are designed for medium- to long-range distances and include missile-based and projectile-based interception systems. For example, the Raytheon Coyote Block 2+ missile is a kinetic interceptor designed to counter small aerial threats, such as drone projectiles. The Coyote's design allows it to engage moving targets with precision and agility. Originally developed as a UAV, the Coyote has been adapted for use as a missile system, with each missile costing approximately $24,000, a disproportionately high cost for destroying relatively inexpensive threats like drone-deployed projectiles<ref name=":7">''Coyote UAS''. (n.d.). <nowiki>https://www.rtx.com/raytheon/what-we-do/integrated-air-and-missile-defense/coyote</nowiki></ref>. The precision and effectiveness of kinetic systems like the Coyote make them particularly valuable for high-priority and threatening targets, despite the high cost-to-intercept.
====Electronic Warfare (Jamming and Spoofing)====
Electronic warfare techniques, such as radio frequency (RF) jamming and GNSS jamming, disrupt the control signals of drones, causing them to lose connectivity to their controller or to the satellite signal. Spoofing, on the other hand, involves hijacking the communication system of the drone and giving it instructions that benefit the defender, such as releasing the projectile in a safe place. While jamming is non-lethal, it may affect other electronics nearby and is ineffective against autonomous drones that don't rely on external signals. For example, DroneShield's DroneGun MKIII is a portable jammer capable of disrupting RF and GNSS signals up to 500 meters away. By targeting a drone's control signals, the DroneGun can cause the drone to lose connection, descend, or prematurely release its payload, which can then be intercepted by other defenses. However, RF jamming can interfere with nearby electronics, making it most suitable for use in remote or controlled environments, to minimize collateral damage to one's own electronic infrastructure. This system has demonstrated effectiveness in remote military applications and large open spaces where the risk of collateral interference is minimized<ref name=":8">''DroneGun MK3: Counterdrone (C-UAS) Protection — DroneShield''. (n.d.). Droneshield. <nowiki>https://www.droneshield.com/c-uas-products/dronegun-mk3</nowiki></ref><ref name=":9">Michel, A. H., The Center for the Study of the Drone at Bard College, et al. (2019). ''COUNTER-DRONE SYSTEMS'' (D. Gettinger, Isabel Polletta, & Ariana Podesta, Eds.; 2nd Edition). <nowiki>https://dronecenter.bard.edu/files/2019/12/CSD-CUAS-2nd-Edition-Web.pdf</nowiki></ref>.
====Directed Energy Weapons (Lasers and Electromagnetic Pulses)====
Directed energy systems like lasers and electromagnetic pulses (EMP) are designed to disable dropped projectiles by damaging electrical components or destroying them outright. Lasers provide precision and instant engagement with minimal collateral damage, although they are very expensive and lose effectiveness in environmental conditions like rain or fog. EMP systems can disable multiple projectiles simultaneously but may interfere with other electronics in the vicinity. For example, Israel's Iron Beam is a high-energy laser system developed by Rafael Advanced Defense Systems for intercepting aerial threats, including projectiles dropped by drones, but mainly other missiles. Unlike kinetic interceptors, Iron Beam offers a lower cost per interception. EMPs, on the other hand, provide a broad area effect, allowing simultaneous disabling of multiple projectiles. However, EMP systems may also disrupt nearby electronics, limiting their use in civilian-populated areas<ref name=":10">Hecht, J. (2021, June 24). Liquid lasers challenge fiber lasers as the basis of future High-Energy weapons. ''IEEE Spectrum''. <nowiki>https://spectrum.ieee.org/fiber-lasers-face-a-challenger-in-laser-weapons</nowiki></ref>.
====Net-Based Capture Systems====
Net-based systems use physical nets to capture projectiles mid-flight, stopping them from reaching their target by redirecting them. Nets can be launched from ground platforms or other drones, effectively intercepting low-speed, low-altitude projectiles. This method is non-lethal and minimizes collateral damage, though it has limitations in range and reloadability. For example, Fortem Technologies' DroneHunter F700 is a specialized drone designed to intercept other drones or projectiles by deploying high-strength nets. The DroneHunter captures aerial threats, stopping them from completing their intended path and thus minimizing potential damage. However, net-based systems have limited range and require reloading unless automated, which can slow response time in scenarios where the threat of drones dropping projectiles is constant<ref name=":11">DroneHunter® F700. (2024, October 24). ''Fortem Technologies''. <nowiki>https://fortemtech.com/products/dronehunter-f700/</nowiki></ref>.
====Geofencing====
Geofencing involves creating virtual boundaries around sensitive areas using GPS coordinates. Drones equipped with geofencing technology are automatically programmed to avoid flying into restricted zones. This method is proactive, preventing any compliant drone from even getting close to any troops, but it can be bypassed by modified or non-compliant drones. DJI, a major drone manufacturer, has integrated geofencing technology into its drones, preventing its users from flying in restricted zones. This feature allows DJI to choose which areas its drones can and cannot enter, and it provides a non-lethal preventive measure. However, geofencing requires drone manufacturer cooperation, and modified or non-compliant drones can bypass these restrictions, making it unreliable as a sole defense<ref name=":12">''DJI FlySafe''. (n.d.). <nowiki>https://fly-safe.dji.com/nfz/nfz-query</nowiki></ref>.
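To make the virtual-boundary idea concrete, here is a minimal sketch of the kind of check a compliant autopilot performs before accepting a position; the zone list and coordinates are made up for illustration.
<syntaxhighlight lang="python">
import math

# Hypothetical no-fly zones as (latitude, longitude, radius in metres).
NO_FLY_ZONES = [
    (51.4481, 5.4903, 500.0),  # illustrative coordinates only
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    r = 6_371_000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon):
    """True if the position falls inside any configured no-fly zone."""
    return any(haversine_m(lat, lon, zlat, zlon) <= zr
               for zlat, zlon, zr in NO_FLY_ZONES)

print(inside_geofence(51.4485, 5.4910))  # True: within 500 m of the zone
</syntaxhighlight>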
===Objectives of Effective Drone Neutralization===
When designing or selecting a drone interception system, several key objectives must be prioritized:
#'''Low Cost-to-Intercept''': Interception costs are critical, as small, cheap drones can carry projectiles that may cost significantly less than the interception method itself. Low-cost systems, like net-based options, are preferred for frequent engagements. Conversely, more expensive kinetic methods may be necessary for high-speed or armored projectiles. Raytheon's Coyote missiles are an example of the cost tradeoff of kinetic systems and highlight the economic considerations that come into play in military versus civilian contexts<ref name=":7" />.
#'''Portability''': Interception systems should ideally be lightweight, collapsible, and transportable across various settings. Portable systems, like the DroneGun MKIII jammer and net-based launchers, enable rapid setup and adaptability to various operational environments, making them valuable in mobile defense scenarios<ref name=":8" />.
#'''Ease of Deployment''': Quick deployment is essential in dynamic scenarios like military operations or large-scale events. For example, drone-based net systems and RF jammers mounted on mobile units offer flexible deployment options, allowing for rapid response in fast-moving situations<ref name=":9" />.
#'''Quick Reloadability or Automatic Reloading''': In high-threat environments, interception systems with rapid or automated reloading capabilities ensure continuous defense. Systems like lasers and RF jammers support quick re-engagement, while net throwers and kinetic projectiles may require manual reloading, potentially reducing efficiency against sustained threats<ref name=":13">''Iron Beam laser weapon, Israel''. (2023, November 1). Army Technology. <nowiki>https://www.army-technology.com/projects/iron-beam-laser-weapon-israel/</nowiki></ref>.
#'''Minimal Collateral Damage''': In urban or civilian areas, minimizing collateral damage is of utmost importance. Non-lethal interception methods, such as jamming, spoofing, and net-based systems, provide effective solutions that neutralize threats without excessive environmental or infrastructural impact. Systems like the Fortem DroneHunter F700 illustrate the potential for non-destructive interception in urban areas<ref name=":11" />.
===Evaluation of Drone Interception Methods===
====Pros and Cons of Drone Interception Methods====
#'''Jamming (RF/GNSS)'''
#*Pros: Jamming effectively disrupts communications between drones and operators, often forcing premature payload drops. Non-destructive and widely applicable, jamming can target multiple drones simultaneously, making it well-suited to civilian defense applications<ref name=":9" />.
#*Cons: Jamming is limited in effectiveness against autonomous drones and can interfere with nearby electronics, posing a risk in urban areas where collateral electronic disruption can impact civilian infrastructure.
#'''Net Throwers'''
#*Pros: Non-lethal and environmentally safe, nets can physically capture projectiles without destroying them, making them ideal for urban settings where collateral damage is a concern.
#*Cons: Effective primarily against slow-moving, low-altitude projectiles, net throwers require manual reloading between uses, which limits their response time during sustained threats unless automated.
#'''Missile Launch'''
#*Pros: With high precision and range, missile systems like Raytheon's Coyote are effective for engaging fast-moving or long-range targets and are ideal in military settings for large-scale aerial threats<ref name=":7" />.
#*Cons: High cost per missile, risk of collateral damage, and infrastructure requirements restrict missile use to defense zones rather than civilian settings.
#'''Lasers'''
#*Pros: Laser systems are silent, precise, and capable of engaging multiple targets without producing physical debris. This makes them valuable in urban environments where damage control is essential<ref name=":13" />.
#*Cons: Lasers are costly, sensitive to environmental conditions like fog and rain, and have high energy demands that complicate portability, limiting their field application<ref name=":10" />.
#'''Hijacking'''
#*Pros: Allows operators to take control of drones without destroying them. It's a non-lethal approach, ideal for situations where it's essential to capture the drone intact.
#*Cons: Hijacking poses collateral risks for surrounding electronics, has limited range, and is operationally complex in active field environments, requiring specialized training and equipment.
#'''Spoofing'''
#*Pros: Non-destructively diverts drones from sensitive areas by manipulating signals, suitable for deterring drones from critical zones<ref name=":9" />.
#*Cons: Technically complex and less effective against drones with advanced anti-spoofing technology, requiring specialized skills and equipment.
#'''Geofencing'''
#*Pros: Geofencing restricts compliant drones from entering sensitive zones, creating a non-lethal preventive barrier that offers permanent coverage<ref name=":12" />.
#*Cons: Reliance on manufacturers for integration and potential circumvention by modified drones limit geofencing as a standalone defense measure in high-risk scenarios.
===Pugh Matrix===
The Pugh Matrix is a decision-making tool used to evaluate and compare multiple options against a set of criteria. By systematically scoring each option across the criteria, the Pugh Matrix helps to identify the most balanced or optimal solution based on the chosen priorities. We created a Pugh Matrix to assess different interception methods for countering projectiles dropped by drones.
Each method was evaluated across six key criteria: Minimal Cost-to-Intercept, Portability, Ease of Deployment, Reloadability, Minimum Collateral Damage, and Effectiveness. For each criterion, the methods were scored as Low (0 points), Medium (1 point), or High (2 points), reflecting their relative strengths and limitations in each area. The scores were then totaled to provide an overall assessment of each method's viability as a counter-projectile solution. This approach enables a comprehensive comparison, highlighting methods that provide a balanced combination of cost-efficiency, ease of use, and effectiveness in interception.
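For transparency, here is a small sketch that recomputes the totals from the ratings under that 0/1/2 scale (three of the methods shown for brevity):
<syntaxhighlight lang="python">
# Recompute the Pugh totals from the ratings (Low = 0, Medium = 1, High = 2).
POINTS = {"Low": 0, "Medium": 1, "High": 2}

ratings = {
    "Jamming (RF/GNSS)": ["Medium", "High", "High", "High", "High", "Medium"],
    "Net Throwers":      ["High", "High", "High", "Medium", "High", "High"],
    "Geofencing":        ["High", "High", "High", "High", "High", "Low"],
}

for method, scores in ratings.items():
    print(method, sum(POINTS[s] for s in scores))
# Jamming (RF/GNSS) 10, Net Throwers 11, Geofencing 10
</syntaxhighlight>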
{| class="wikitable"
|+
!Method
!Minimal Cost-to-Intercept
!Portability
!Ease of Deployment
!Reloadability
!Minimum Collateral Damage
!Effectiveness
!Total Score
|-
|Jamming (RF/GNSS)
|Medium
|High
|High
|High
|High
|Medium
|10
|-
|Net Throwers
|High
|High
|High
|Medium
|High
|High
|11
|-
|Missile Launch
|Low
|Low
|Medium
|Low
|Low
|High
|3
|-
|Lasers
|High
|Medium
|Medium
|High
|High
|High
|10
|-
|Hijacking
|High
|High
|Medium
|Low
|High
|Medium
|8
|-
|Spoofing
|Medium
|High
|Medium
|Medium
|High
|Medium
|8
|-
|Geofencing
|High
|High
|High
|High
|High
|Low
|10
|}
The resulting scores can be found in the Pugh Matrix above, where Net Throwers scored the highest with 11 points, indicating strong performance across several criteria, particularly in minimizing collateral damage and cost-effectiveness. Other methods, such as Jamming and Geofencing, also scored well, while missile-based solutions, despite high effectiveness, scored lower due to high costs and limited portability. Consequently, we will focus on Net Throwers as our main interception mechanism.
=='''Path Prediction of Projectile'''<ref>Autonomous Ball Catcher Part 1: Hardware — Baucom Robotics</ref><ref>Ball Detection and Tracking with Computer Vision - InData Labs</ref><ref>Detecting Bullets Through Electric Fields – DSIAC</ref><ref>An Introduction to BYTETrack: Multi-Object Tracking by Associating Every Detection Box (datature.io)</ref><ref>Online Trajectory Generation with 3D camera for industrial robot - Trinity Innovation Network (trinityrobotics.eu)</ref><ref>Explosives Delivered by Drone – DRONE DELIVERY OF CBNRECy – DEW WEAPONS Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD) (pressbooks.pub)</ref><ref>Deadliest weapons: The high-explosive hand grenade (forcesnews.com)</ref><ref>SM2025.pdf (myu-group.co.jp)</ref><ref>Trajectory estimation method of spinning projectile without velocity input - ScienceDirect</ref><ref>An improved particle filtering projectile trajectory estimation algorithm fusing velocity information - ScienceDirect</ref><ref>(PDF) Generating physically realistic kinematic and dynamic models from small data sets: An application for sit-to-stand actions (researchgate.net)</ref><ref>https://kestrelinstruments.com/mwdownloads/download/link/id/100/</ref><ref>Normal and Tangential Drag Forces of Nylon Nets, Clean and with Fouling, in Fish Farming. An Experimental Study (mdpi.com)</ref><ref>A model for the aerodynamic coefficients of rock-like debris - ScienceDirect</ref><ref>Influence of hand grenade weight, shape and diameter on performance and subjective handling properties in relations to ergonomic design considerations - ScienceDirect</ref><ref>'Molotov Cocktail' incendiary grenade | Imperial War Museums (iwm.org.uk)</ref><ref>My Global issues - YouTube</ref><ref>4 Types of Distance Sensors & How to Choose the Right One | KEYENCE America</ref>==
===Theory:=== | |||
Catching a projectile requires several steps. First the projectile has to be detected, after which its trajectory has to be determined. Once we know how the projectile moves through space and time, the net can be shot to catch it. However, depending on the distance to the projectile, the net needs different amounts of time to reach it, and in this time the projectile moves to a different location. The net must therefore be shot at a position where the projectile will be in the future, such that the two collide.

Since projectiles do not make sound and do not emit RF waves, they are not as easy to detect as drones. For this part we assume the projectile is visible. Making the system detect projectiles which are not visible would probably be possible, but it would complicate things considerably. The U.S. Army uses electric fields to detect passing bullets; something similar could be used to detect projectiles which are not visible, but this is outside the scope of this project due to its complexity.

To detect a projectile, a camera with tracking software is used. The same camera also has to detect drones, which is done by training an AI model for drone detection.
Now that the projectile is in sight, its trajectory has to be determined. The projectile should only decelerate due to air friction and accelerate due to gravity. For a first model, air friction can be neglected to get a good approximation of the projectile's flight. Since not every projectile experiences the same air resistance, the best friction coefficient should be found experimentally by dropping different projectiles: the best coefficient is the one for which most projectiles are caught by the system. An improvement would be to pre-assign different friction coefficients to different sizes of projectiles. Since surface area plays a big role in the amount of friction a projectile experiences, this is a reasonable refinement.
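To make this drop model concrete, the minimal sketch below numerically integrates a straight-down drop with a single lumped drag constant k/m, the quantity that would be fitted in the drop experiments described above. The value 0.005 1/m used here is only an assumed example, not a measured coefficient.

<syntaxhighlight lang="python">
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def drop_time(height, k_over_m, dt=0.001):
    """Fall time of a projectile released from rest, with quadratic air
    drag lumped into a single constant k/m (k/m = 0 recovers free fall)."""
    y, v, t = height, 0.0, 0.0
    while y > 0.0:
        a = -G + k_over_m * v * v  # drag opposes the downward motion
        v += a * dt
        y += v * dt
        t += dt
    return t

# Free fall vs. a grenade-like projectile (k/m = 0.005 1/m, assumed value)
print(drop_time(78.5, 0.0))    # ~4.0 s
print(drop_time(78.5, 0.005))  # slightly longer: drag slows the fall
</syntaxhighlight>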
With the expected path of the projectile known, the net can be launched to catch the projectile midair. Basic kinematics gives accurate results for this problem, and it can be treated as a 2D problem: since we only protect against projectiles coming towards the system, we can always define a plane containing both the trajectory of the projectile and the system, making the problem two-dimensional. If the path of the projectile left this plane, the system would not need to protect against it, as a projectile that misses the plane containing the system does not form a threat.
===Calculation: === | |||
[[File:Imagedfdsfsdfdsf.png|thumb|223x223px|left|PP1: Output 2D model]] | |||
A Mathematica script was written which calculates the angle at which the net should be shot. The script currently uses placeholder values which have to be determined experimentally based on the hardware that is used; for example, the speed at which the net is shot should be measured and updated in the code to get a good result. The distance and speed of the projectile can be determined using sensors on the device. The output of the Mathematica script is shown in figure PP1: it gives the angle at which to shoot the net, as well as the trajectories of the net and projectile, to visualize how the interception will happen.
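The Mathematica script itself is not reproduced here, but the following Python sketch shows the same underlying calculation under the stated assumptions (no air resistance, known net launch speed); all input values below are placeholders, just like those in the script. Because both the net and the projectile are ballistic after launch, gravity cancels out of their relative motion, leaving a single equation for the flight time.

<syntaxhighlight lang="python">
import numpy as np

def launch_solution(p_t, v_t, net_speed, iters=20):
    """Launch angle and flight time for a ballistic net charge (fired from
    the origin at speed net_speed) to meet a ballistic projectile with
    current position p_t and velocity v_t. Gravity cancels in the relative
    motion, so we only solve |p_t + v_t * t| = net_speed * t for t."""
    t = np.linalg.norm(p_t) / net_speed       # initial guess: straight shot
    for _ in range(iters):                    # fixed-point iteration
        t = np.linalg.norm(p_t + v_t * t) / net_speed
    aim = (p_t + v_t * t) / (net_speed * t)   # unit launch direction
    return np.degrees(np.arctan2(aim[1], aim[0])), t

# Projectile 40 m away at 50 m height, falling at 10 m/s; net at 30 m/s
# (all assumed values)
angle, t = launch_solution(np.array([40.0, 50.0]), np.array([0.0, -10.0]), 30.0)
print(round(angle, 1), round(t, 2))  # ~39.3 degrees, ~1.72 s
</syntaxhighlight>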
===Accuracy:=== | |||
[[File:Imagevsdvsdvdsvsd.png|thumb|319x319px|PP2: Calculations accuracy]] | |||
The height at which projectiles are dropped can be estimated from footage of projectiles being dropped. The height can be determined by assuming the projectile falls in vacuum, which represents reality well here, and is given by 0.5*g*t^2. Using a YouTube video<ref>Ukrainian Mountain Battalion drop grenades on Russian forces with weaponised drone (youtube.com)</ref> as data, almost every drop takes at least 4 seconds, which means the projectiles are dropped from at least 78.5 m. Suppose we catch the projectile at two thirds of its fall, still leaving plenty of height for it to be redirected safely, and the net is 1 by 1 meter (so half a meter from its center to the closest side of the net). Then the projectile must not be dropped more than 0.75 meter next to us (outside of the plane), since the system would not catch it even if everything else went perfectly (see figure PP2 for the calculation). Even if the projectile were dropped 0.7 meter next to the device, the net would hit it with its side, which does not guarantee that the projectile stays in the net.
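For reference, the fall-height arithmetic behind these numbers, using the vacuum assumption stated above:

<math>h = \tfrac{1}{2} g t^{2} = \tfrac{1}{2}\cdot 9.81\,\mathrm{m/s^{2}}\cdot (4\,\mathrm{s})^{2} \approx 78.5\,\mathrm{m}</math>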
An explosive projectile will still do damage 0.75 meters away from a person. This means that the earlier assumption that a 2D model suffices, because everything happens in one plane, does not fulfill all our needs. Enemy drones do not need to be accurate to the centimeter, since explosive projectiles like grenades can kill a person even 10 meters away. For better results, a 3D model should therefore be used.
===3D model:=== | |||
[[File:3D model.png|thumb|502x502px|PP3: Output 3D model]] | |||
We tried to replicate the 2D model in 3D, but this did not work out with the same generality. For this reason some extra assumptions were made. These assumptions are based on reality and are therefore still valid for this system; the only thing they change is the generality of the model, which could otherwise be used in multiple different cases instead of only projectiles dropped from drones.

In the 2D model the starting velocity of the projectile could be varied. In reality, however, drones do not have a launching mechanism and simply release the projectile without any starting velocity, so the projectile drops straight down (apart from some sideways movement due to external forces like wind). This was observed after watching extensive drone warfare footage, where it was also noted that drones do not usually crash into people, but mainly into tanks, since tanks require an accurate hit between the base and the turret. Against people, drones drop the projectile from a standstill (to get the required aim). This simplification also makes the 2D model valid again: since the projectile has no sideways movement, it never leaves the plane, and we can always define a plane through the path of the projectile and the mechanism which shoots the net.

Since this mechanism operates in the real, three-dimensional world, it was decided to plot the 2D model at the required angle in 3D, giving a good representation of how the mechanism will work. The new model also gives the required shooting angle and then shows the paths of the net and projectile in 3D. For further insight, the 2D trajectories of the net and projectile are also plotted; this can be seen in figure PP3.
===Accuracy 3D model:=== | |||
[[File:Imagevvd.png|thumb|368x368px|PP5: Interception with wind]] | |||
[[File:Imageddd.png|left|thumb|355x355px|PP4: Calculations weight net]] | |||
The 3D model as it is now set up only works in a "perfect" world, without wind, air resistance or any other external factors which may influence the paths of the projectile and the net. We also assume that the system knows the position of the drone with infinite accuracy. In reality this is simply not true, so it is important to know how closely this model replicates reality and whether it can still be used.
Wind plays a big role in the paths of both the projectile and the net, so the model must also function under these circumstances. In order to determine the acceleration of the projectile and the net, the drag force on both objects must be determined. Two important factors on which the drag force depends are the drag coefficient and the frontal area of the object. Since different projectiles are used during warfare, such as hand grenades or Molotov cocktails, the exact drag coefficient and frontal area of the projectile are unknown. After a literature review it was decided to take average values for the drag coefficient and the frontal area, since the reported values lay in the same range; for the frontal area this was to be expected, as drones can only carry objects of limited size. After some calculations (see figure PP4) it was found that if the net (including the weights on the corners) weighs 3 kg, the accelerations of the projectile and the net due to wind are identical, still leading to a perfect interception, as can be seen in figure PP5. This is based on literature values; at a later stage the exact drag coefficient and surface area of the net must be found and the weight changed accordingly. As for projectiles which do not exactly match the assumed drag coefficient or frontal area, the model shows that deviations of up to 50% of the used values do not push the projectile far enough off course for the interception to miss. This range includes almost all theoretical values found for the different projectiles, making the model highly reliable under the influence of wind.
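The sketch below illustrates this comparison. All numbers are assumed stand-ins for the literature values mentioned above (drag coefficients, frontal areas and the 3 kg net mass), chosen only to show how the drag-induced accelerations of projectile and net can be matched.

<syntaxhighlight lang="python">
RHO = 1.225  # air density [kg/m^3]

def drag_acceleration(c_d, area, mass, v_rel):
    """a = F_d / m with F_d = 0.5 * rho * C_d * A * v_rel^2,
    where v_rel is the speed relative to the wind."""
    return 0.5 * RHO * c_d * area * v_rel**2 / mass

wind = 10.0  # m/s crosswind (assumed)
a_projectile = drag_acceleration(c_d=0.47, area=0.005, mass=0.4, v_rel=wind)
a_net        = drag_acceleration(c_d=1.2,  area=0.015, mass=3.0, v_rel=wind)
print(round(a_projectile, 3), round(a_net, 3))  # ~0.36 vs ~0.37 m/s^2:
# nearly identical, so wind deflects net and projectile almost alike
</syntaxhighlight>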
An uncertainty within the system is the exact location of the drone. We aim to know accurately where the drone, and thus the projectile, is, but in reality an infinitely accurate location is unachievable; we can, however, get close. The sensors in the system must be optimized to locate the drone as accurately as possible. Fortunately there are sensors which achieve high accuracy, for example a LiDAR sensor with a range of 2000 m and an accuracy of 2 cm. The 2000 m range covers the distances at which our system operates, and the 2 cm accuracy is far smaller than the size of the net (100 cm by 100 cm), so it should not cause problems for the interception.[[File:Labeled system view.png|thumb|391x391px|Labeled System Overview]]
=='''Mechanical Design'''== | |||
=== <u>Introduction</u> === | |||
To thoroughly detail the mechanical design and the design choices made throughout the design process, each component will be discussed individually. For each component, a brief description will be provided as to why this feature is necessary in the system, followed by a number of possible component design choices, and finally which design choice was chosen and why. To conclude, an overview of the system as a whole will be given along with a short evaluation and some next steps for improvements to the system beyond the scope of this project. | |||
=== <u>Kinetic Interception Charge (A)</u> === | |||
'''<u>Feature Description</u>''' | |||
Kinetic interception techniques include a wide range of drone and projectile interception strategies that, as the name suggests, contain a kinetic aspect, such as a shotgun shell or a flying object. This method of interception relies on a physical impact or near-impact knocking the target off course or disabling it entirely. The necessity for kinetic interception capabilities was made clear during the first target market interview, in which the possibility of jamming and (counter-)electronic warfare was discussed, along with the consequent necessity to intercept drones and dropped projectiles in a non-electronic fashion.
'''<u>Possible Design Choices</u>''' | |||
* Shotgun shells | |||
** Using a mounted shotgun to kinetically intercept drones and projectiles is a possibility, proven also by the fact that shotguns are used as last lines of defense by troops at fixed positions as well as during transport <ref>Across, M. (2024, September 27). ''Surviving 25,000 Miles Across War-torn Ukraine | Frontline | Daily Mail''. YouTube. <nowiki>https://youtu.be/kqKGYn13MeM?si=VPhO7jFG0sHQQXiW</nowiki></ref>. Shotguns are also relatively widely available and the system could be designed to be adaptable to various shotgun types. However, the system would be very heavy and would need to be able to withstand the very powerful recoil of a shotgun firing. | |||
* Net charge | |||
** A net is a light-weight option which can be used to cover a relatively large area-of-effect consistently. This would make the projectile or drone much easier to hit, and thus contribute to the reliability of the system. The light weight would allow it to be rotated and fired faster than heavier options, as well as reducing the weight of any additional charges carried for the system, a critical factor mentioned in both the first and the second target market interviews. | |||
* Air-burst ammunition | |||
** This type of ammunition is designed to detonate in the air close to the incoming projectile or drone. It is very effective at preventing these hazards from reaching their targets, but it sharply increases the cost-to-intercept, a concept also introduced in the interviews. As the most expensive of these three options, it is less suitable for this system.
'''<u>Selected Design</u>''' | |||
The chosen kinetic interception charge comprises a lightweight square net with a surface area of 1 m^2. The net has thin, weighted disks on its corners, which are stacked on top of one another to form a cylindrical charge that can be loaded into the spring-loaded or pneumatically-loaded turret. This charge is fired at incoming targets; when fired, the net spreads out and catches the projectile at the position where the path prediction algorithm calculates it will be by the time the net reaches it. This decision is based on the net's light weight, low cost-to-intercept, quick reloading capability, and reliability in diverting a projectile caught within its surface area.
=== <u>Rotating Turret (B)</u> === | |||
<u>'''Feature Description'''</u> | |||
The rotating turret system is designed to provide 360 degrees of rotational freedom for the attached net charge, allowing threats to be engaged from any direction. Not only does the turret rotate 360 degrees in the horizontal plane, it also rotates 90 degrees in the vertical plane, i.e. from completely horizontal to completely vertical. This capability is critical, especially in the context of fast-paced wars with ever-changing frontlines, such as the one in Ukraine, because drone attacks can come from all around any given position. The versatility of a rotating turret significantly enhances the system's ability to respond to these aerial threats. Two stepper motors fitted with rotary encoders rapidly move the turret, one for each plane of rotation. The encoders keep the electronics aware of the current orientation of the net charge, so it can be moved to the correct position once a threat has been detected.
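A minimal sketch of this aiming step, assuming a 1.8° stepper with 16x microstepping (both assumed values); the encoder reading closes the loop by providing the turret's current orientation:

<syntaxhighlight lang="python">
STEPS_PER_REV = 200 * 16  # 1.8-degree stepper with 16x microstepping (assumed)

def steps_to(target_deg, encoder_deg):
    """Shortest signed rotation, in microsteps, from the encoder-reported
    orientation to the target bearing."""
    error = (target_deg - encoder_deg + 180.0) % 360.0 - 180.0
    return round(error / 360.0 * STEPS_PER_REV)

# Threat detected at azimuth 250 degrees while the turret encoder reads 10:
# rotate -120 degrees (the short way) instead of +240 degrees.
print(steps_to(250.0, 10.0))  # -> -1067 microsteps
</syntaxhighlight>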
[[File:Sensor belt.png|thumb|229x229px|Sensor Belt Top View]] | |||
=== <u>Sensor 'Belt' (C)</u> === | |||
'''<u>Feature Description</u>''' | |||
The sensor ring around the bottom of the turret contains slots for three cameras and microphones. Furthermore, it and the main body interior can be adapted to house further sensor components, such as radar, thermal imaging and RF detectors. Fitting the system with a range of sensors plays an important role in ensuring threats of different types can be detected, and provides a critical advantage in various weather and environmental conditions, such as dense trees and foliage around the system, or fog or smoke in the vicinity.
'''<u>Possible Design Choices</u>''' | |||
* Sensors | |||
** Camera | |||
*** Best choice for open areas, as well as any situation where there is a direct line of sight to the drone, and consequently the projectile it drops. Three cameras each covering 120 degrees of space would be combined to provide a view of the entire 360 degree environment. However, as soon as line of sight is broken, the cameras alone are insufficient to detect drones effectively. | |||
** Microphone | |||
*** Fitting the system with microphones complements the use of cameras effectively. Even when line of sight is broken, the microphone can pick up the sound of the drone flying. This is done by continuously performing frequency analysis on the audio recorded by the microphones and checking whether sound frequencies typically associated with flying drones are present, including those given off by the small motors which power the rotor blades. If these frequencies are significantly present, the sound is interpreted as the detection of a drone nearby. A shortcoming of the microphone is its relatively small range, and it works less well with background noise, such as loud explosions or frequent firing.
** Radar | |||
*** While the range of radar sensors is typically very large, there is a significant limitation to this type of detection for small and fast FPV drones. The detectable 'radar cross-section' of the FPV drones is very small, and often radars are designed and calibrated to detect much larger objects, such as fighter jets or naval vessels. This means a specialized radar component would be required, which however would prove to be very expensive and likely difficult to source if a high-end option were necessary. However, some budget options are available and are discussed above this in the wiki. Finally, an additional advantage radar components could possibly provide is the detection of larger and slower surveillance drones flying at a higher altitude. To detect these and warn the soldiers of their presence would allow them to better prepare for incoming surveillance, and consequently also an influx of FPV drone attacks on their position if they are found. | |||
** RF Detector | |||
*** The RF detector is a very useful device which senses the radio frequencies (RF) emitted by drones, drone controllers, and other electronic equipment. By analyzing these frequencies for those typically used by drones to communicate with their pilots, drones in the vicinity can quickly be detected. In theory, this could also be used to block that signal and prevent the drone from being controlled near the system.
* Sensor implementation | |||
** Sensors implemented directly onto the moving turret | |||
*** This option fixes the sensors relative to the turret and the horizontal orientation of the net charge. This would mean that the turret would rotate until the cameras detect the drone at a predetermined 'angle zero', which aligns with where the turret points. | |||
** Ring/Belt around the central turret | |||
***With a fixed ring around a moving turret, the cameras (and other sensors) and the turret are not fixed relative to one another. Instead, the sensors are fixed with respect to the rest of the system, and thus with the ground/environment.
'''<u>Selected Design</u>''' | |||
Having the sensors directly implemented into the turret has the significant downside that the cameras would be moving while the turret is being aimed. Rather than providing constant feedback on a stationary drone, their output would change as the turret rotates towards it. Furthermore, the image received by the cameras would likely be blurred during the rotation, decreasing the chance of accurate detection and tracking. Therefore, the fixed ring of sensors was chosen. While a radar sensor provides long-range detection benefits, the most critical sensors for FPV detection are the remaining three: camera, microphone and RF detector. These would therefore be the primary choices for a minimum-cost system.
=== <u>Main Body (D)</u> === | |||
'''<u>Feature Description</u>''' | |||
The main body consists of a short cylinder with a diameter of 275 mm. It houses the main electronics of the system and is made of 3D-printed PETG, with the possibility of adding an aluminium frame for additional stability at the expense of additional weight. The main body features a removable bottom (a twist-and-pull lid) to provide access to the electronics in case maintenance is necessary. Its size has been limited to allow for easier transport, and complex parts have been avoided to allow for easy replacement and quick understanding of the system, helping to avoid the need for additional training. The bottom lid also contains a bracket which allows the upper system to slide and fix onto the tripod rail below in a quick and simple manner. This provides fast attachment and detachment and allows the system to be quickly dismantled if necessary, another important factor discussed in the aforementioned target market interviews.
[[File:Tripod.png|thumb|235x235px|Possible Tripod Choice]] | |||
=== <u>Tripod and Tripod Connector (E and F)</u> === | |||
'''<u>Feature Description</u>''' | |||
The base of the system, as well as a connecting element between the base and the rest of the system, had a number of requirements from target users. Firstly, the system should be easy to dismantle in case positions need to be changed quickly. Secondly, it should be easily transportable, with considerations for not just the size but also the weight of the components. Finally, the base should be able to accommodate various environments, meaning various ground conditions. | |||
'''<u>Possible Design Choices</u>''' | |||
* Base | |||
** Fixed legs (like a chair or table) | |||
*** While fixed legs provide the opportunity to add additional stability to the system through reinforced materials in the legs of the system, these cannot be adapted for different conditions on the ground. For example, if uneven ground is present, the system would not be able to correct for this. Furthermore, solid legs for additional support would significantly increase the weight of the system. Lastly, if the legs are of a fixed shape, they cannot be folded and packed away into a smaller volume, making transportation more difficult. | |||
** Collapsible tripod | |||
*** A collapsible tripod has a number of advantages and disadvantages. A first disadvantage compared to fixed legs is the reduction in strength, further exacerbated by the collapsible joints of the legs likely weakening the structure relative to their fixed counterparts. However, the adaptability of a tripod to uneven ground and other potentially difficult ground conditions makes it very useful. It is also much easier to transport, given the possibility of reducing the bounding volume of the base during transport.
* Base-to-system connection | |||
** Quick-release mechanism | |||
*** While contributing to a fast disassembly through the rapid process of quick-release mechanisms opening and closing, these mechanisms can often be more complex than necessary, essentially trading a fast mechanism for a slightly more elaborate build, involving multiple moving parts, springs, and so on. This increases the risk of components getting stuck during use, especially in muddy, wet or otherwise interfering environments. | |||
** Rail-slide | |||
*** The rail connection is a simple solution which balances well the requirement for a simple mechanism that minimizes the risk of jamming or malfunction with the requirement for a fast connection and disconnection when necessary. It requires no tools, nor any additional training to use. | |||
** Screws | |||
*** Most stable option but definitely not viable considering the need to be able to rapidly assemble and disassemble the system in active combat zones. Furthermore, this would require users to constantly carry around a screwdriver, and if it gets lost, there would be some significant issues with disassembly. | |||
'''<u>Selected Design</u>''' | |||
The rail-slide mechanism was selected for its balance between a simple, robust design and fast deployment and disassembly in situations where every second counts. This connection mechanism, together with the highly adaptable tripod base, allows the system to be deployed quickly in a wide range of ground conditions without requiring training or time investments for setting up and dismantling. Both design choices also make transportation as easy as possible.
=== <u>Evaluation and Next Steps</u> === | |||
With an estimated weight of around 27 kg (based on the proposed materials and their components' volumes, as well as the weights of similar existing products), the weight goal has been reached, although this comes at the cost of sacrificing sturdier, more robust materials in some places. The system fits in a regular backpack and is quickly disassembled: a timed disassembly of the 3D-printed model took ~11 seconds, comprising removing the rail slide, disconnecting the sensor ring, turret and main body from one another, and placing everything into a backpack, which completes another requirement. Furthermore, the speed of the system was demonstrated with a proof-of-concept prototype using lower-grade motors, achieving a maximum rotation (360 degrees horizontal, 90 degrees vertical) in around 1.5 seconds, providing clear evidence for the validity of such a system, especially when constructed in its full form.

While not all sensors could be integrated, and some other limitations were present, such as the weight estimate only just meeting the aforementioned goal, there are now clear steps for improving this system design in the future. Firstly, a fully functional, full-scale model of the system should be printed, assembled and tested using a spring-loaded net charge, to perform more in-depth mechanical tests such as the impact of continuous use on various joints and stress concentrations throughout the body, and to determine how effective the tripod really is on ground that is not flat and dry but uneven, muddy or sandy. Furthermore, it should be tested how carrying one, or even two, systems affects a person's endurance, and how realistic it is for a single individual to transport the system over prolonged periods, beyond estimates based on its weight. Based on the findings of these studies, the mechanical design could be reiterated and further improved.
=='''Prototype'''== | |||
===Components Possibilities=== | |||
====''Radar Component (Millimeter-Wave Radar):'' ==== | |||
'''Component: Infineon BGT60ATR12C''' | |||
*Price: Around €25-30 | |||
*Description: A 60 GHz radar sensor module, compact and designed for small form factors, ideal for detecting the motion of drones. | |||
*Software: Infineon's Radar Development Kit (RDK), a software platform to develop radar signal processing algorithms. | |||
*Website: https://www.infineon.com/cms/en/product/sensor/radar-sensors/radar-sensors-for-automotive/60ghz-radar/bgt60atr24c/ | |||
'''Component: RFBeam K-LC7 Doppler Radar''' | |||
*Price: Around €55 | |||
*Description: A Doppler radar module operating at 24 GHz, designed for short to medium range object detection. It’s used in UAV tracking due to its cost-efficiency and low power consumption. | |||
*Software: Arduino IDE or MATLAB can be used for basic radar signal processing. | |||
*Website: https://rfbeam.ch/product/k-lc7-radar-transceiver/ | |||
====''RF Component:''==== | |||
'''Component: LimeSDR Mini''' | |||
*Price: Not available for delivery at the moment
*Description: A compact, full-duplex SDR platform supporting frequencies from 10 MHz to 3.5 GHz, useful for RF-based drone detection. | |||
*Software: LimeSuite, a suite of software for controlling LimeSDR hardware and custom signal processing. | |||
*Website: https://limemicro.com/boards/limesdr-mini/ | |||
'''Component: RTL-SDR V3 (Software-Defined Radio)''' | |||
*Price: Around €30-40 | |||
*Description: An affordable USB-based SDR receiver capable of monitoring a wide frequency range (500 kHz to 1.75 GHz), including popular drone communication bands (2.4 GHz and 5.8 GHz). While not as advanced as higher-end SDRs, it’s widely used in hobbyist RF applications. | |||
*Software: GNU Radio or SDR# (SDRSharp), both of which are open-source platforms for signal demodulation and analysis. | |||
* Website: https://www.rtl-sdr.com/buy-rtl-sdr-dvb-t-dongles/ | |||
====''Acoustic Component:''==== | |||
'''Component: Adafruit I2S MEMS Microphone (SPH0645LM4H-B)''' | |||
*Price: Around €7-10 | |||
*Description: A low-cost MEMS microphone offering high sensitivity, commonly used in sound detection projects for its clarity and noise rejection. | |||
*Software: Arduino IDE or Python with SciPy for sound signature recognition. | |||
*Website: https://www.adafruit.com/product/3421 | |||
'''Component: DFRobot Fermion MEMS Microphone Module - S15OT421 (Breakout)'''
*Price: Around €4 | |||
*Description: A low-cost MEMS microphone offering high sensitivity, commonly used in sound detection projects for its clarity and noise rejection. | |||
*Software: Arduino IDE. | |||
*Website: https://www.dfrobot.com/product-2357.html | |||
====''Vision-Based Component:''==== | |||
'''Component: Arducam 12MP Camera (Visible Light)''' | |||
*Price: Around €60 | |||
*Description: A lightweight, high-resolution camera module, ideal for machine learning and visual detection. | |||
*Software: OpenCV combined with machine learning libraries like TensorFlow or PyTorch for object detection and tracking. | |||
* Website: https://www.arducam.com/product/arducam-12mp-imx477-mini-high-quality-camera-module-for-raspberry-pi/ | |||
'''Component: Raspberry Pi Camera Module v2''' | |||
*Price: Around €15-20 | |||
*Description: A small, lightweight 8MP camera compatible with Raspberry Pi, offering high resolution and ease of use for vision-based drone detection. It can be paired with machine learning algorithms for object detection. | |||
*Software: OpenCV with Python for real-time image processing and detection, or TensorFlow for more advanced machine learning applications. | |||
*Website: https://www.raspberrypi.com/products/camera-module-v2/ | |||
'''Component: ESP32-CAM Module''' | |||
*Price: Around €10-15 | |||
*Description: A highly affordable camera module with a built-in ESP32 chip for Wi-Fi and Bluetooth, ideal for wireless image transmission. It’s a great choice for low-cost vision-based systems. | |||
* Software: Arduino IDE or Python with OpenCV for basic image recognition tasks. | |||
* Website: https://www.tinytronics.nl/en/development-boards/microcontroller-boards/with-wi-fi/esp32-cam-wifi-and-bluetooth-board-with-ov2640-camera | |||
====''Sensor Fusion and Central Control''==== | |||
'''Component: Raspberry Pi 4''' | |||
*Price: Around €35 | |||
*Description: A small, affordable single-board computer that can handle sensor fusion, control systems, and real-time data processing. | |||
*Software: ROS (Robot Operating System) or MQTT for managing communications between the sensors and the central processing unit. | |||
*Website: https://www.raspberrypi.com/products/raspberry-pi-4-model-b/ | |||
'''Component: ESP32 Development Board''' | |||
*Price: Around €10-15 | |||
* Description: A highly versatile, Wi-Fi and Bluetooth-enabled microcontroller. It’s lightweight and low-power, making it ideal for sensor fusion in portable devices. | |||
* Software: Arduino IDE or MicroPython, with MQTT for sensor data transmission and real-time control (a minimal MQTT example is sketched after this list).
*Website: https://www.espressif.com/en/products/devkits | |||
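As an illustration of the MQTT option for central control, the sketch below publishes a detection event from one sensor to a broker running on the central unit. The topic name, broker address and message fields are all hypothetical, and the paho-mqtt 1.x client API is assumed; the fusion process would subscribe to these topics and apply the voting logic described later.

<syntaxhighlight lang="python">
import json
import time
import paho.mqtt.client as mqtt  # assuming the paho-mqtt 1.x API

client = mqtt.Client("sensor-belt-cam0")
client.connect("raspberrypi.local", 1883)  # hypothetical broker on the Pi 4

# Publish one detection event from a camera in the sensor belt.
event = {"sensor": "camera0", "confidence": 0.91,
         "bearing_deg": 250.0, "timestamp": time.time()}
client.publish("detections/camera0", json.dumps(event))
</syntaxhighlight>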
=='''Testing of Prototype components'''== | |||
To verify whether we have achieved some of these requirements, we have to devise test scenarios which allow us to quantitatively determine our prototype's accuracy. The notion of accuracy may of course be ambiguous: we cannot test our prototype in a warzone-like environment, so accuracy in a lab may not translate to accuracy in the trenches. However, we attempt to simulate such an environment through the use of a projector, as well as speakers.
==='''Component testing process''' === | |||
In this section we show and explain how the microphone and the camera are able to detect drones, and whether these components meet our requirements.
==== Acoustic method ==== | |||
In order to detect drones in a recorded sound, we first need a reference sound to compare the recording against; our reference sound is, naturally, the sound of the drone. This sound file can be plotted as an audio signal with time on the x-axis and the signal's strength in arbitrary units on the y-axis (Fig 15.1). After applying the Fourier transform and normalising the result, a plot in the frequency domain is obtained, with frequency on the x-axis and normalised strength on the y-axis (Fig 15.2). The same is done for the recorded sound we want to test. In the analysis, the frequency domain is restricted to 500 Hz-3000 Hz, because our test drones and the background noises we used fall within this range. What remains is to search the recorded normalised frequency-domain plot (Fig 15.3) for the 'global shape' of the reference sound's normalised frequency-domain plot (Fig 15.2). This can be done in multiple ways; we first explain how our method works and then why we chose it.[[File:Total image.png|right|639x639px]][[File:Equations1.png|right|524x524px]]

Our goal is to obtain a set of frequencies which represents the 'fingerprint' of the drone's normalised frequency-domain plot. We obtain this by setting a threshold value (λ) equal to 1 and lowering it in small steps, marking each frequency peak we encounter with a red dot, until a set of 'n' peak frequencies representing this specific sound is obtained. By trial and error, 'n' was set to the optimal value of 100 for the reference sound and to 250 for the recorded sound; the results are plotted in (Fig 15.4) for the drone sound and (Fig 15.5) for the recorded sound. With both frequency sets obtained, the only thing left is to compare them, which is done by (Eq 15.1): for each frequency in the drone reference set, we check whether there is a frequency in the recorded set such that the absolute value of their difference is smaller than the tolerance. The tolerance was set to 2 Hz by trial and error; this margin is needed because the Doppler effect may shift frequencies when the drone moves, and because the microphone may pick up slightly different frequencies. The number of frequencies m that meet the requirement of (Eq 15.1) thus lies between 0 and 100, where m=100 means perfect recognition of the drone. We chose this method because it is an intuitive concept, relatively easy to implement, and computationally very light, so recordings can be analysed quickly.

The number m then needs to be transformed into a percentage representing the chance that a drone is present. The simplest conversion would be a linear relationship; however, after conducting a few experiments at varying distances, with and without background noise, the relationship did not appear to be linear. Up to roughly m=20 we know with great certainty that there is no drone, because regular background noise often matches a few peaks. Likewise, at m=90 we know with reasonably good certainty that a drone is present, but the experiments showed that, especially with background noise, often not all frequencies from the drone set are reflected in the recorded set. The uncertain domain therefore lies roughly between m=20 and m=90, and the 'Logistic Formula' (Eq 15.2) fits this behaviour quite well.
In the program we have taken a=0.16 and b=55. The value of 'b' reflects the number of corresponding peaks at which the certainty of a drone being present is 50%; 55 was chosen because it is the middle of the uncertain domain. The values of 'a' and 'b' are unfortunately not fully validated, as we did not have the proper equipment or enough time to conduct a thorough experiment.
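A condensed Python sketch of this acoustic pipeline is given below. It approximates the descending-threshold peak marking by simply taking the n strongest spectral bins, and it assumes the logistic formula (Eq 15.2) has the standard form p(m) = 1/(1 + e^(-a(m-b))); the sample data and rate in the usage comment are placeholders.

<syntaxhighlight lang="python">
import numpy as np

def peak_frequencies(signal, sample_rate, n_peaks, f_min=500.0, f_max=3000.0):
    """Approximate the descending-threshold peak marking by taking the
    n_peaks strongest spectral bins in the 500-3000 Hz band."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= f_min) & (freqs <= f_max)
    spectrum, freqs = spectrum[band], freqs[band]
    spectrum = spectrum / spectrum.max()            # normalise to [0, 1]
    return freqs[np.argsort(spectrum)[-n_peaks:]]

def match_count(reference_peaks, recorded_peaks, tolerance=2.0):
    """Eq 15.1: count reference peaks with a recorded peak within 2 Hz."""
    return sum(np.any(np.abs(recorded_peaks - f) < tolerance)
               for f in reference_peaks)

def drone_probability(m, a=0.16, b=55.0):
    """Eq 15.2 (assumed logistic form): map m in [0, 100] to a probability."""
    return 1.0 / (1.0 + np.exp(-a * (m - b)))

# Usage with placeholder mono recordings sampled at 44.1 kHz:
# ref = peak_frequencies(drone_sound, 44100, n_peaks=100)
# rec = peak_frequencies(recording, 44100, n_peaks=250)
# print(drone_probability(match_count(ref, rec)))
</syntaxhighlight>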
==== Visual method ==== | |||
Detecting drones accurately is crucial for various applications, including security and surveillance. While acoustic detection methods provide reliable results by identifying drone sounds in noisy environments, they can be limited in range or accuracy under certain conditions. To complement acoustic detection, we implemented a visual detection system using YOLOv8, a state-of-the-art object detection model, and trained it on an online dataset of drone images to detect drones in real time.
[[File:Visual based detection examples.png|thumb|450x450px|Examples of a drone being detected by the YOLOv8 model.]] | |||
The first step was data preparation, which required gathering a comprehensive dataset of drone images captured under varied conditions, including different backgrounds, distances, and angles. Each image in the dataset was labeled with a bounding box precisely marking the drone's position; fortunately, the drone dataset we found already included an XML annotation file for each photo. A training configuration file was then created in the YOLO format, specifying the paths to the dataset images and labels, as well as the object class, drones in this case.

With the dataset ready, we moved to model initialization. To expedite training and improve accuracy, we started with the YOLOv8x model initialized with weights pre-trained on COCO (Common Objects in Context), a large object-detection dataset. These weights provided a strong foundation: the model began training with general object recognition capabilities, which were subsequently refined to specialize in detecting drones. The pre-trained model's understanding of common object features (such as edges, shapes, and textures) enabled faster learning and improved accuracy when adapted to drone detection.

The training process involved configuring the model with optimized parameters. A moderate image size of 640x640 pixels was selected to balance detection accuracy with training speed, and training was conducted for approximately 32 epochs, which gave the model sufficient time to learn the nuances of drone features. Throughout training, the model minimized a loss function comprising three main components: bounding box accuracy, object presence probability, and class confidence. By refining these aspects, the model learned to distinguish drones from other objects and background clutter (e.g., leaves). Upon completion, the model's best-performing checkpoint was saved locally for later use.

Before moving on to real-time detection, we tested the model on videos found on the internet, looking for flaws and checking whether more training was needed. These tests showed that although the model detected the drone when it was close to the camera or alone in the sky, it struggled with busy backgrounds, especially trees and their leaves. We therefore retrained the model for 100 more epochs and increased the training image resolution to 1024x1024 pixels. This required a lot of computational power, so we paid for Google Colab Pro to train the model faster.

After training, the YOLOv8 model was deployed for real-time detection. The model was applied to video feeds, analyzing each frame individually and drawing bounding boxes around detected drones while distinguishing them from other elements in the background. However, because the laptop we tested on lacked a GPU, the frame rate dropped significantly when running the trained YOLOv8x model, so we compensated by running the model only every third frame. This made the model well-suited for real-time applications, such as live drone feed monitoring or video surveillance.
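The essential training and inference steps are sketched below using the ultralytics Python API. The dataset configuration file drone.yaml and the checkpoint path are hypothetical, and the every-third-frame trick compensates for the missing GPU as described above; in practice, training and inference would live in separate scripts.

<syntaxhighlight lang="python">
from ultralytics import YOLO
import cv2

# Fine-tune a COCO-pretrained YOLOv8x model on the drone dataset
# ("drone.yaml" is a hypothetical dataset config in YOLO format).
model = YOLO("yolov8x.pt")
model.train(data="drone.yaml", epochs=32, imgsz=640)

# Real-time detection, running the model only every third frame.
model = YOLO("runs/detect/train/weights/best.pt")  # best checkpoint
cap = cv2.VideoCapture(0)
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 3 == 0:
        annotated = model(frame, verbose=False)[0].plot()  # draw boxes
    cv2.imshow("drone detection", annotated)
    frame_idx += 1
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>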
The resulting real-time drone detection model achieved high accuracy, especially when the entire outline of the drone was visible, even if it was surrounded by other objects, like trees, houses or clouds. Moreover, we were able to consistently detect a drone when it was 30 meters away in the sky. However, this distance testing failed when the drone was camouflaged by buildings or by trees, but we made up for it using the acoustic methods as described above. | |||
==== Sensor Fusion methodology ==== | |||
From literature research we found five sensor fusion algorithms that could be relevant to our detection system:
# Extended Kalman Filter | |||
# Convolutional Neural Networks | |||
# Bayesian Networks | |||
# Decision-Level Fusion with Voting Mechanisms | |||
# Deep Reinforcement Learning | |||
The first three listed here cannot be applied properly with only a vision-based and an acoustic sensing component, and the fifth is not feasible within the resources and time limit of this project. Therefore we decided to take a deeper look into Decision-Level Fusion with Voting Mechanisms and how we could apply it to our tests.
We looked into three different ways of applying this method. In the first, we state that a drone is near if at least one of the two sensors claims to have detected one. In the second, we state that a drone is near only if both sensors claim a detection. In the third, if one of the sensors claims a detection, we lower the detection threshold of the other sensor and still require this double confirmation.
With the first method we found a lot of false positives: the system would indicate that a drone is near while this is not the case. This was because our vision-based component was not very accurate.
The second method worked better than the first, but only in specific scenarios. If the drone was far away, the camera could pick it up, but the microphone simply did not have the range, so the detection was discarded.
The third method gives a balance between these two. It produces fewer false alarms than the first method, since it still needs confirmation from both sensors, while missing fewer drones than the second method, since it does not demand that every sensor fire at full strictness.
However, this would still miss some detections at times: consider again the situation with the drone far away, where the microphone can never confirm the camera. To adapt the method better to our system, the sensors should therefore be checked for applicability in each situation. If, for example, the vision-based component detects a drone and estimates it to be a certain number of meters away, the acoustic component should be considered uninformative and taken out of the decision making. This is because we know from tested specifications that the microphone can only detect a drone up to a certain range; if the camera picks up a drone beyond that range, the microphone will never detect it, and its data should not be considered. The exact cut-off distance depends on the components used and would require more testing to determine accurately. This approach works best when more than two components are used, because otherwise excluding a sensor reduces the third method to the first, which produces many false alarms, something we would like to avoid.
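A sketch of this adaptive voting rule, under assumed confidence thresholds and an assumed microphone range:

<syntaxhighlight lang="python">
def fuse(camera_conf, mic_conf, est_distance_m,
         base_thresh=0.8, relaxed_thresh=0.5, mic_range_m=40.0):
    """Adaptive decision-level voting (third method). All thresholds and
    the microphone range are placeholder values to be calibrated."""
    if est_distance_m > mic_range_m:
        # Microphone cannot physically confirm here, so ignore it. Note
        # this degenerates to single-sensor voting, hence the need for
        # more than two sensors in the full system.
        return camera_conf >= base_thresh
    if camera_conf >= base_thresh or mic_conf >= base_thresh:
        # One sensor fired: require the other to pass a relaxed threshold.
        return camera_conf >= relaxed_thresh and mic_conf >= relaxed_thresh
    return False

# Example: confident camera detection at 25 m, weak acoustic confirmation.
print(fuse(camera_conf=0.9, mic_conf=0.6, est_distance_m=25.0))  # True
</syntaxhighlight>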
== '''Future Work''' == | |||
Expanding this project could focus on several key areas that enhance functionality and reliability. Anyone looking to expand this project could look into the following areas.
'''Detection of Projectiles and Drone/Projectile trajectory (speed & direction)''' | |||
We have done research into tracking the path of a drone and found an existing project that tracks the motion of a drone using a YOLOv8 model, so extending our project in that direction should not take too much effort. This should be followed by an experimental analysis of the drone's real-world location and distance compared to its size and position in the video feed, so that the detection and interception methods can be merged.
'''Improvement of Detection Accuracy through Improved Sensor Fusion''' | |||
We have done research into the different components that could be added to the system for drone detection. We tested two of these, but not all, so future work could test the other components as well. Additionally, more testing could be done on sensor fusion with all components in play.
'''Testing Interception Method''' | |||
We have done research into different interception methods, and into how a net could be used to intercept projectiles. Future work could test intercepting objects using a net, determine whether our calculations are correct, and work out how to make a device that holds a net, how to shoot it using the system, and how to reload the net into the system.
'''Field Testing in Diverse Environments''' | |||
It is important to analyse whether our system would perform as expected in different environments. Future work could therefore test how adaptable the system is and improve its ability to adapt to different environments.
=='''Literature Research'''==
<u>'''Autonomous Weapons Systems and International Humanitarian Law: Need for Expansion or Not'''</u><ref>Willy, Enock, Autonomous Weapons Systems and International Humanitarian Law: Need for Expansion or Not (NOVEMBER 16, 2020). Available at SSRN: https://ssrn.com/abstract=3867978 or http://dx.doi.org/10.2139/ssrn.3867978</ref>
A significant challenge with autonomous systems is ensuring compliance with international laws, particularly IHL. The paper delves into how such systems can be designed to adhere to humanitarian law and discusses critical and optional features such as the capacity to identify combatants and non-combatants effectively. This is directly relevant to ensuring our system's utility in operational contexts while adhering to ethical norms.
'''<u>Artificial Intelligence Applied to Drone Control: A State of the Art</u>'''<ref name=":3">Caballero-Martin D, Lopez-Guede JM, Estevez J, Graña M. Artificial Intelligence Applied to Drone Control: A State of the Art. ''Drones''. 2024; 8(7):296. https://doi.org/10.3390/drones8070296</ref>
This paper explores the integration of AI in drone systems, focusing on enhancing autonomous behaviors such as navigation, decision-making, and failure prediction. AI techniques like deep learning and reinforcement learning are used to optimize trajectory, improve real-time decision-making, and boost the efficiency of autonomous drones in dynamic environments.
'''<u>Drone Detection and Defense Systems: Survey and Solutions</u>'''<ref name=":0">Chiper F-L, Martian A, Vladeanu C, Marghescu I, Craciunescu R, Fratu O. Drone Detection and Defense Systems: Survey and a Software-Defined Radio-Based Solution. ''Sensors''. 2022; 22(4):1453. https://doi.org/10.3390/s22041453</ref>
This paper provides a comprehensive survey of existing drone detection and defense systems, exploring various sensor modalities like radio frequency (RF), radar, and optical methods. The authors propose a solution called DronEnd, which integrates detection, localization, and annihilation functions using Software-Defined Radio (SDR) platforms. The system highlights real-time identification and jamming capabilities, critical for intercepting drones with minimal collateral effects.
'''<u>Advances and Challenges in Drone Detection and Classification</u>'''<ref name=":2">Seidaliyeva U, Ilipbayeva L, Taissariyeva K, Smailov N, Matson ET. Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review. ''Sensors''. 2024; 24(1):125. https://doi.org/10.3390/s24010125</ref>
This state-of-the-art review highlights the latest advancements in drone detection techniques, covering RF analysis, radar, acoustic, and vision-based systems. It emphasizes the importance of sensor fusion to improve detection accuracy and effectiveness.
This article discusses the increasing threat posed by small unmanned aerial systems (sUAS) to military forces, particularly the U.S. Department of Defense (DoD). It highlights how enemies are using these drones for surveillance and delivery of explosives.
'''<u>The Rise of Radar-Based UAV Detection For Military: A Game-Changer in Modern Warfare</u>'''<ref name=":1">''The rise of Radar-Based UAV Detection for Military: A Game-Changer in Modern Warfare''. (2024, June 11). Spotter Global. <nowiki>https://www.spotterglobal.com/blog/spotter-blog-3/the-rise-of-radar-based-uav-detection-for-military-a-game-changer-in-modern-warfare-8</nowiki></ref>
This article discusses how radar-based unmanned aerial vehicle (UAV) detection is transforming military operations. SpotterRF's systems use advanced radar technology to detect drones in all conditions, including darkness or bad weather. By integrating AI, these systems can distinguish between drones and non-threats like birds, improving accuracy and reducing false positives.
This paper gives clear insight into different scenarios in which drones can be a threat, for example to large crowds but also in warfare. However, it does not give a concrete solution.
<u>'''An anti-drone device based on capture technology'''<ref>An anti-drone device based on capture technology. Yingzi Chen, Zhiqing Li, Longchuan Li, Shugen Ma, Fuchun Zhang, Chao Fan. <nowiki>https://doi.org/10.1016/j.birob.2022.100060</nowiki> <nowiki>https://www.sciencedirect.com/science/article/pii/S2667379722000237</nowiki></ref></u>
This paper explores the capabilities of capturing a drone with a net. It also addresses some other forms of anti-drone devices, such as lasers, hijacking and RF jamming.
Beyond that, the paper is very technical about the net capturing itself.
'''<u>Four innovative drone interceptors<ref>Four innovative drone interceptors. Svetoslav Zabunov, Garo Mardirossian. <nowiki>https://doi.org/10.7546/CRABS.2024.02.09</nowiki></ref></u>'''
This paper describes 5 different ways of detecting drones: acoustic detection and tracking with microphones positioned in a particular grid, video detection by cameras, thermal detection, radar detection and tracking, and lastly detection through the drone's radio emissions. Because we also want to be able to catch 'off-the-shelf' drones, we have to investigate which of these are appropriate. For taking down the drone, the paper gives 6 options: missile launch, radio jamming, net throwers, machine guns, lasers and drone interceptors. The 4 drone interceptors it introduces are a bit above our budget, as they are real drones with various kinds of generators to take down a drone (for example with a high electric pulse), but we could still look into this.
'''<u>Comparative Analysis of ROS-Unity3D and ROS-Gazebo for Mobile Ground Robot Simulation</u>'''<ref name=":4">Platt, J., Ricks, K. Comparative Analysis of ROS-Unity3D and ROS-Gazebo for Mobile Ground Robot Simulation. ''J Intell Robot Syst'' '''106''', 80 (2022). https://doi.org/10.1007/s10846-022-01766-2</ref> | |||
This paper examines the use of Unity3D with ROS versus the more traditional ROS-Gazebo for simulating autonomous robots. It compares their architectures, performance in environment creation, resource usage, and accuracy, finding that ROS-Unity3D is better for larger environments and visual simulation, while ROS-Gazebo offers more sensor plugins and is more resource-efficient for small environments. | |||
=='''Logbook'''== | |||
{| class="wikitable"
|+Week 1
!Name
!Total
!Break-down
|-
|Robert Arnhold
|16h
|Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [11], [12], [13], [14], [15] (10h), Summarized and described key takeaways for papers/patents [11], [12], [13], [14], [15] (2h)
|-
|Tim Damen
|16h
|Attended lecture (2h), Attended meeting with group (2h), Analysed papers [12], [13], [14], [15], [16] (10h), Summarized and described key takeaways for papers [12], [13], [14], [15], [16] (2h)
|-
|Ruben Otter
|17h
|Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [1], [2], [3], [4], [5] (10h), Summarized and described key takeaways for papers/patents [1], [2], [3], [4], [5] (2h), Set up Wiki page (1h)
|-
|Raul Sanchez Flores
|16h
|Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [6], [7], [8], [9], [10] (10h), Summarized and described key takeaways for papers/patents [6], [7], [8], [9], [10] (2h)
|}
{| class="wikitable" | |||
|+Week 2 | |||
!Name | |||
!Total | |||
!Break-down | |||
|- | |||
|Max van Aken | |||
|13h | |||
|Attended lecture (30min), Attended meeting with group (1h), Researched kinds of situations for the device (2h), Wrote about situations (1.5h), Researched ethics (6h), Wrote about ethics (2h)
|- | |||
|Robert Arnhold | |||
|6.5h | |||
|Attended lecture (30min), Attended meeting with group (1h), Worked on interview questions (1h), Organizing introductory interview (2h), Preparing interviews for next weeks (2h)
|- | |||
|Tim Damen | |||
|13.5h | |||
|Attended lecture (30min), Attended meeting with group (1h), Risk evaluation (2h), Important features (1h), Research on ethics of deflection (8h), Writing part about deflection (1h) | |||
|- | |||
|Ruben Otter | |||
|14.5h | |||
|Attended lecture (30min), Attended meeting with group (1h), Analysed papers [2], [3], [4], [5] for research in drone detection (6h), Wrote about drone detection and its relation to our system using papers [2], [3], [4], [5] (4h), Analysed and summarized paper [6] (2h), Wrote about usage of simulation and its software (1h) | |||
|- | |- | ||
|Raul Sanchez Flores | |Raul Sanchez Flores | ||
|14.5h | |||
|Attended lecture (30min), Attended meeting with group (1h), Researched and analysed papers about approaches to drone interception (4h), Researched and analysed papers about drone interception (6h), Evaluated different existing drone interception methods (3h)
|} | |||
{| class="wikitable" | |||
|+Week 3 | |||
!Name | |||
!Total | |||
!Break-down | |||
|- | |||
|Max van Aken | |||
|6h | |||
|Attended lecture (30min), Meeting with group (1.5h), researched ethics (2h), rewriting ethics (2h) | |||
|- | |||
|Robert Arnhold | |||
|13h | |||
|Attended lecture (30min), Meeting with group (1.5h), Completed interview (4h), planning next interview (1h), conceptualizing mechanical-side (3h), state-of-the-art research and review of previously prepared sources (3h) | |||
|- | |||
|Tim Damen
|10h | |||
|Attended lecture (30min), Meeting with group (1.5h), Analysed papers [6], [7], [8], [9] [10], [11] (6h), Rewrote ethics of deflection based on autonomous cars (2h) | |||
|- | |||
|Ruben Otter | |||
|10h | |||
|Attended lecture (30min), Meeting with group (1.5h), Research into possible drone detection components (6h), Created list of possible prototype components (2h) | |||
|- | |||
|Raul Sanchez Flores
|9.5h | |||
|Attended lecture (30min), Meeting with group (1.5h), Made interception methods comparison more in-depth (5h), Created Pugh Matrix to compare different interception methods (2h), Emailed Ruud about components we need (30min)
|} | |||
{| class="wikitable"
|+Week 4
!Name | |||
!Total | |||
!Break-down | |||
|- | |||
|Max van Aken | |||
|11h | |||
|Attended lecture (30min), Meeting with group (1.5h), Researched default drone specifications (2h), Did calculations on mass prediction (6h), Mathematica calculations (1h)
|- | |||
|Robert Arnhold | |||
|13h | |||
|Attended lecture (30min), Meeting with group (1.5h), investigating mechanical design concepts and documenting process (7h), summarizing discussion and learnings (2h, not yet finished), contacting new interviewees (2h) | |||
|- | |||
|Tim Damen
|17h | |||
|Attended lecture (30min), Meeting with group (1.5h), Analysed 4 papers (4h), Research on how to catch a projectile with a net (6h), Written text on how to catch a projectile with a net (2h), Worked on a Mathematica script for calculations to catch a projectile (3h) | |||
|- | |||
|Ruben Otter | |||
|15h | |||
|Attended lecture (30min), Meeting with group (1.5h), Attended meeting with Ruud (1h), Setup Raspberry Pi (2h), Research into audio sensors (2h), Started coding and connecting the audio sensor with the Raspberry Pi (8h)
|- | |||
|Raul Sanchez Flores
|12h | |||
|Attended lecture (30min), Meeting with group (1.5h), Attended meeting with Ruud (1h), Wrote specifications for our design (5h), Formatted table of requirements (1h), Began writing testing methods for each specification (3h)
|} | |||
{| class="wikitable"
|+Week 5
!Name
!Total
!Break-down
|-
|Max van Aken | |||
| 11h | |||
|Attended lecture (30min), Meeting with group (1.5h), spring calculations (4h), worked on microphone code (5h) | |||
|- | |||
|Robert Arnhold | |||
|11h | |||
|Attended lecture (30min), Meeting with group (1.5h), discussing schedule with potential new interviewee, developing mechanical design (2h), summarizing focus and key learnings from previous interview (2h), investigating mechanical components, materials, structures, and other design features (5h) | |||
|- | |||
|Tim Damen
|17h
|Attended lecture (30min), Meeting with group (1.5h), Finalize Mathematica to catch projectile in 2D (1h), Research on catching projectile in 3D (4h), Work on Mathematica to catch projectile in 3D (6h), Analysis of the accuracy of the 2D model (2h), Writing the text for the wiki (1h), General research to write text and do calculations (1h)
|- | |||
|Ruben Otter | |||
|11h | |||
|Attended lecture (30min), Meeting with group (1.5h), Continued integrating microphone sensor with Raspberry Pi (9h) | |||
|- | |||
|Raul Sanchez Flores
|17h | |||
|Attended lecture (30min), Meeting with group (1.5h), Finished writing testing methods (5h), Research into YOLOv8 and watching a video of its usage (4h), Found appropriate training and validation drone datasets (3h), Began implementing Python code to train drone datasets using YOLOv8 (3h)
|} | |||
{| class="wikitable"
|+Week 6
!Name
!Total
!Break-down
|-
|Max van Aken | |||
| 11h | |||
|Attended lecture (30min), Meeting with group (1.5h), begin with presentation/text (6h), further developed microphone code (3h) | |||
|- | |||
|Robert Arnhold | |||
|10h | |||
|Attended lecture (30min), Meeting with group (1.5h), performing second interview with volunteer in Ukraine (2h), collecting and structuring notes for review (2h), continuing mechanical design research and formalizing CAD design (4h) | |||
|- | |||
|Tim Damen
|8h | |||
|Attended lecture (30min), Meeting with group (1.5h), Worked on creating the 3D model in Mathematica (5h), written part about the 3D model (1h) | |||
|- | |||
|Ruben Otter | |||
|11h | |||
|Attended lecture (30min), Meeting with group (1.5h), Downloaded and learned MATLAB syntax (3h), Research into different types of drones (2h), Finding drone sound files for specific drones (1h), Coded in MATLAB to analyse frequency of the sample drone sound (3h)
|- | |||
|Raul Sanchez Flores
|10.5h | |||
|Attended lecture (30min), Meeting with group (1.5h), Trained YOLOv8 model on a drone dataset (1.5h), Wrote email to Ruud to borrow drone (0.5h), Meeting with Ruud to see if we can borrow drone (0.5h), Wrote emails to Duarte Guerreiro Tomé Antunes to borrow drone (0.5h), Looked for other places to borrow a drone (1.5h), Tested YOLOv8 model on drone videos on the internet (4h)
|} | |||
{| class="wikitable"
|+Week 7
!Name
!Total
!Break-down
|-
|Max van Aken | |||
|8h | |||
|Attended lecture (30min), Meeting with group (1.5h), writing text for presentation (2h), testing code to convert output to a probability (4h)
|- | |||
|Robert Arnhold | |||
|12h | |||
|Attended lecture (30min), Meeting with group (1.5h), summarizing secondary interview notes into key takeaways and confirmed specifications (3h), completing CAD design for given specifications (2h), 3D printing main body of system (4h) and constructing body of prototype (1h) | |||
|- | |||
|Tim Damen
|12.5h | |||
|Attended lecture (30min), Meeting with group (1.5h), Research on type of projectiles, dimensions, effects of air resistance on these projectiles and on net (4h), Adapt model with wind (0.5h), Specified assumptions made based on research done (2h), Tested accuracy of 3D model (2h), Written text about accuracy of 3D model (1h), Cleared up some parts on the wiki page (1h) | |||
|- | |||
|Ruben Otter | |||
|16h | |16h | ||
|Attended lecture (2h), Attended | |Attended lecture (30min), Meeting with group (1.5h), Continue coding in MatLab to analyse the frequency of the sample drone sound and comparing these with other sound files (4h), Research into Sensor Fusion (3h), Apply Sensor Fusion methodology on the test results(2h), Create presentation(2h), Prepare for presentation(3h) | ||
|- | |||
|Raul Sanchez Flores
|14.5h | |||
|Attended lecture (30min), Meeting with group (1.5h), Implemented real-time drone detection using camera and YOLOv8 (4h), Re-tested YOLOv8 model using a larger training dataset with higher-resolution images (3h), Flew drone outside, recorded it, and tested the real-time detection using camera (1.5h), Tested newly trained YOLOv8 model on recorded videos, and cut them for the presentation (4h)
|} | |||
{| class="wikitable"
|+Week 8
!Name
!Total
!Break-down
|-
|Max van Aken | |||
|7h | |||
|Attended lecture (30min), Meeting with group (1.5h), writing/reading wiki (5h) | |||
|- | |||
|Robert Arnhold | |||
|12h | |||
|Attended lecture (30min), Meeting with group (1.5h), Completed mechanical design (5h), Formalized interview notes for wiki (2h), Wrote up wiki sections (3h) | |||
|- | |||
|Tim Damen
|7h | |||
|Attended lecture (30min), Meeting with group (1.5h), Change order on wiki and improve some small parts (1h), Finished up last pieces of my text (1h), Improved the photos (1h), Final review wiki (2h) | |||
|- | |||
|Ruben Otter | |||
|10.5h | |||
|Attended lecture (30min), Meeting with group (1.5h), Prepare for presentation (1h), Present the presentation (30min), Some additional research into Sensor Fusion (2h), Write part about sensor fusion from a theoretical point of view (2h), Write part about how we applied certain sensor fusion methodology in our testing (2h), Write future work section (1h)
|- | |||
|Raul Sanchez Flores
|12h | |||
|Attended lecture (30min), Meeting with group (1.5h), Added specification justification section to the wiki (4h), Wrote about the visual-based drone detection in the wiki (4h), Added citations to my sections in the wiki (2h)
|} | |} | ||
=='''References'''==
<references />
Risk Evaluation
A risk evaluation matrix can be used to determine where the risks are within our project. This is based on two factors: the consequence if a task is not fulfilled and the likelihood that this happens. Both of these factors are rated on a scale from 1 to 5 and using the matrix below a final risk is determined. This can be a low, medium, high or critical risk. Knowing the risks beforehand gives the ability to prevent failures from occurring as it is known where special attention is required.
Task | Consequence (1-5) | Likelihood (1-5) | Risk |
---|---|---|---|
Collecting 25 articles for the SoTA | 1 | 1 | Low |
Interviewing front line soldier | 1 | 2 | Low |
Finding features for our system | 4 | 1 | Medium |
Making a prototype | 3 | 3 | Medium |
Make the wiki | 5 | 1 | Medium |
Finding a detection method for drones and projectiles | 4 | 1 | Medium |
Determine (ethical) method to intercept or redirect drones and projectiles | 5 | 1 | Medium |
Prove the system's utility | 5 | 2 | High |
Target Market Interviews
Interview 1: Introductory interview with F.W., Australian Military Officer Cadet (military as a potential user)
Q1: What types of anti-drone (/projectile protection) systems are currently used in the military?
- A: The systems currently in use include the Trophy system from Israel, capable of intercepting anti-tank missiles and potentially intercepting dropped payload explosives or drones. Another similar system is the Iron Fist, also from Israel. Additionally, Anduril's Lattice system offers detection and identification capabilities within a specified operational area, capable of detecting all possible threats and, if necessary, intercepting them.
Q2: What are key features that make an anti-drone system effective in the field?
- A: Effective systems are often deployed in open terrains like fields and roads, as drones have difficulty navigating dense forests. An effective anti-drone system could include a training or tactics manual to optimize use in these environments. Key features also include a comprehensive warning system that can alert troops on the ground to incoming drones, allowing them to take cover. Systems should not focus solely on interception but also on early detection.
Q3: What are the most common types of drones that these systems are designed to counter?
- A: The systems are primarily designed to counter consumer drones and homemade FPV drones, which are known for their speed and accuracy. These drones are incredibly effective and easy to construct.
Q4: What are the most common types of drone interception that these systems employ?
- A: The common types of interception include non-kinetic methods such as RF and optical laser jamming, which have a low cost-to-intercept. Kinetic methods include shooting with regular rounds, using net charges, or employing blast interception techniques such as those used in the Iron Fist system. High-cost methods like air-burst ammunition are also utilized due to their high interception likelihood.
Q5: Are there any specific examples of successful or failed anti-drone operations you could share?
- A: No specific examples were shared during the interview.
Q6: How are drones/signals determined to be friendly or not?
- A: The identification process was not detailed in the interview.
Q7: What are the most significant limitations of current anti-drone systems?
- A: Significant limitations include the high cost, which makes these systems unaffordable for individual soldiers or small groups. Most systems are also complex and require vehicle mounts for transportation, making them less suitable for quick or discreet maneuvers.
Q8: Are there any specific environments where anti-drone systems struggle to perform well?
- A: These systems typically perform well in large open areas but may struggle in environments with dense vegetation, which can offer natural cover for troops but limit the functionality of the systems.
Q9: Can you give a rough idea of the costs involved in deploying these systems?
- A: The costs vary widely, and detailed pricing information is generally available on a need-to-know basis, making it difficult to provide specific figures without direct inquiries to manufacturers.
Q10: Which ethical concerns may be associated with the use of anti-drone systems, particularly regarding urban, civilian areas?
- A: Ethical concerns are significant, especially regarding the deployment in civilian areas. The military undergoes extensive ethical training, and all new systems are evaluated for their ethical implications before deployment.
Q11: What improvements do you think are necessary to make anti-drone systems more effective?
- A: The interview did not specify particular improvements but highlighted the need for systems that can be easily deployed and operated by individual soldiers.
Q12: Do you think AI or machine learning could help enhance anti-drone systems?
- A: The potential for AI and machine learning to enhance these systems is recognized, with ongoing research into their application in anti-drone and anti-missile technology.
Q13: Is significant training required for personnel to effectively operate anti-drone systems?
- A: The level of training required varies, but there is a trend towards developing systems that require minimal training, allowing them to be used effectively straight out of the box.
Q14: How do these systems usually handle multiple drone threats or swarm attacks?
- A: Handling multiple threats involves a combination of detection, tracking, and engagement capabilities, which were not detailed in the interview.
Q15: How are these systems tested and validated before they are deployed in the field?
- A: Systems undergo rigorous testing and validation processes, often conducted by military personnel to ensure effectiveness under various operational conditions.
Interview 2: Interview with B.D. to specify requirements for system, Volunteer on Ukrainian front lines (volunteers and front line soldiers as potential users)
What types of anti-drone systems are currently used near the front lines?
- Mainly see RF jammers and basic radar systems deployed here
- Detection priority
- Nothing for specifically intercepting munitions dropped by drones
- Follow-up: What are general ranges for these RF jammers and other detection methods?
- Very different, RF detectors or radar can have ranges of 100 meters but very much depends on the conditions
- Rarely available consistently across positions
- Soldiers usually listen for drones but this can be difficult when moving or during incoming/outgoing fire
- Any RELIABLE range would be beneficial but to give enough time to react a range around 20 or 30 meters would probably be the minimum
What are key features that make an anti-drone system effective in the field?
- Needs to be easy to move and hide, no bulky shapes or protruding parts, light
- No space in backpacks, cars, trucks, already often full of equipment. Nothing you couldn’t carry and run with (estimate: 30kg)
- Pack-up and deployment speed: exfil usually takes multiple hours of packing up for drone operator squads, but front line troops are in constant movement. Anything above 30 seconds is insufficient [and that already seems to be pushing it]
- No language barrier advisable [assuming: basically meaning no training]
- Detection range is quite critical. A few seconds need to be given for soldiers or others near the front lines to prepare for an incoming impact
- Follow-up: What kind of alarm to show drone detection would be best?
- [...] Conclusion: sound is the only real option without immediately giving away the position of the system; drones will often fly over positions, so alarming soldiers with lights would risk revealing positions the drone would otherwise have flown past. Sound is already risky.
What are the most significant limitations of current anti-drone systems?
- Apart from RF and some alternatives no real solution to drone drops on either side
- Systems that are light enough to be carried by one person and simple enough to operate without extensive training are rare, if any exist at all
General Discussion on Day-to-Day Experiences
- Drone attacks very unpredictable. Periods of constant drone sounds above, then days of nothing. Constant vigilance necessary, which tires you out and puts you under serious pressure.
- According to people he has talked to, RF jammers and RF/radar-based detection are useful, but it is very difficult to counter an FPV drone with a payload once it is able to reach you and get close; this is also very environment-dependent, e.g. open field vs. forest
- Despite the conditions, there's a shared sense of purpose between locals, volunteers, etc.
- Constant fundraising needed to gather enough funds to fix vehicles (such as recently the van shown on IG account), as well as equipment and supplies both for locals and soldiers
Users and their Requirements
We currently have two main use cases for this project in mind:
- Military forces facing threats from drones and projectiles.
- Privately-managed critical infrastructure in areas at risk of drone-based attacks.
From our literature review and our interviews, we have determined that users of the system will require the following:
- Minimal maintenance
- High reliability
- System should not pose an additional threat to its surroundings
- System must be personally portable
- System should work reliably in dynamic, often extreme, environments
- System should be scalable and interoperable in concept
Ethical Framework
The purpose of this project is to develop a portable defense system for neutralizing incoming projectiles, intended as a last line of defense in combat scenarios. In such circumstances, incoming projectiles may vary, and not all require neutralization, meaning decisions will vary based on specific conditions. Here, we focus on combat situations, such as war zones where environments evolve rapidly, necessitating adaptive equipment for soldiers. The system requires advanced software capable of distinguishing potential threats, such as differentiating between birds and drones or identifying grenades dropped from high altitudes[2]. In these contexts, it must adhere to the Geneva Conventions, which emphasize minimizing harm to civilians. This principle guides the need for the device to evaluate its impact on civilians and ensure that its actions do not directly or indirectly cause harm. In addition, if the device is to be used in combat environments, it must operate within International Humanitarian Law, which states that civilians may not be targeted in a combat environment[3].
Consider a scenario in which a kamikaze drone targets a military vehicle. The device could intercept the drone in one of two ways: by redirecting it or by triggering it to explode above the ground. To approach this problem we look at the three major ethical traditions: virtue ethics, deontology and utilitarianism. First, virtue ethics: in short, it prescribes the path that a 'virtuous person' would choose[4]. However, if the drone were redirected too close to a populated area, or if an aerial explosion resulted in scattered fragments, there is a potential risk of civilian casualties. Such scenarios introduce a moral dilemma similar to the "trolley problem"[5]. Activating the system may protect soldiers while risking civilian harm, while inaction leaves soldiers vulnerable. Therefore, the device must be capable of directing threats to secure locations, minimizing risk to civilians and aiming to maximize overall safety. In this scenario virtue ethics cannot help us choose wisely, because we do not know what a 'virtuous person' would choose with regard to redirecting a drone. Deontology is built upon strict rules to follow[6]. In our example this could result in two different rules: the first rule could be 'Do redirect the drone', the second could be 'Do not redirect the drone'. This would be an easy solution to the problem, but we are still left with the problem of which rule to choose. Utilitarianism suggests that maximizing happiness often involves minimizing harm[7]. This ethical approach judges actions by their outcomes. The device should therefore, in principle, anticipate whether redirecting the drone would cause casualties, and how many. For now we assume that we cannot anticipate this, because it would probably require massive amounts of data and memory. With our environment in mind, we can assume that at the drone's point of impact there will always be casualties if it is not redirected; otherwise the device would not need to be at that location. If the drone is redirected, the chance of people being outside the trenches is relatively small, because such a person would be in plain sight of the enemy. Therefore, from a utilitarian point of view, the least harm is caused over time if we consistently redirect the drone away from its intended point of impact: the proportion of prevented casualties outweighs the proportion of caused casualties.
Specifications
To guide us in our design decisions, we have to create a set of specifications that reflect our users' needs more specifically. Our design decisions include our choice of drone detection and projectile interception, and using the knowledge gained from our literature review and interviews, we can set up a list of SMART requirements. These requirements will help us design the best prototype for our users, while also keeping within the bounds of International Humanitarian Law, as predefined by our ethical framework.
ID | Requirement | Preference | Constraint | Category | Testing Method |
---|---|---|---|---|---|
R1 | Detect Rogue Drone | Detection range of at least 30m | | Software | Simulate rogue drone scenarios in the field |
R2 | Object Detection | 100% recognition accuracy | Detects even small, fast-moving objects | Software | Test with various object sizes and speeds in the lab |
R3 | Detect Drone Direction | Accuracy of 90% | | Software | Use drones flying in random directions for validation |
R4 | Detect Drone Speed | Accuracy within 5% of actual speed | Must be effective up to 20m/s | Software | Measure speed detection in controlled drone flights |
R5 | Detect Projectile Speed | Accurate speed detection for fast projectiles | Must handle speeds above 10m/s | Software | Fire projectiles at varying speeds and record accuracy |
R6 | Detect Projectile Direction | Accuracy within 5 degrees | | Software | Test with fast-moving objects in random directions |
R7 | Track Drone with Laser | Tracks moving targets within a 1m² radius | Must follow targets precisely within the boundary | Hardware | Use a laser pointer to follow a flying drone in real-time |
R8 | Can Intercept Drone/Projectile | Drone/Projectile is within the 1m² square | Must not damage surroundings or pose threat | Hardware | Test in a field, using projectiles and drones in motion |
R9 | Low Cost-to-Intercept | Interception cost under $50 per event | | Hardware & Software | Compare operational cost per interception in trials |
R10 | Low Total Cost | Less than $2000 | Should include all components (detection + net) | Hardware | Budget system components and assess affordability |
R11 | Portability | System weighs less than 3kg | | Hardware | Test for total weight and ease of transport |
R12 | Easily Deployable | Setup takes less than 5 minutes | Must require no special tools or training | Hardware & Software | Timed assembly by users in various environments |
R13 | Quick Reload/Auto Reload | Reload takes less than 30 seconds | Must be easy to reload in the field | Hardware | Measure time to reload net launcher in real-time scenarios |
R14 | Environmental Durability | Operates in temperatures between -20°C and 50°C | Must work reliably in rain, dust, and strong winds | Hardware | Test in extreme weather conditions (wind, rain simulation) |
Justification of Requirements
In the first requirement R1, Detect Rogue Drone, we require a detection range of at least 30 meters. This range is crucial to give soldiers or operators enough time to react to potential threats. According to B.D. (Interview 2), a detection range of 20–30 meters would be the minimum needed to give front-line soldiers a reliable chance to prepare for impact or take cover. This range aligns with the typical operational range of RF and radar detection systems, as discussed in The Rise of Radar-Based UAV Detection For Military, making it a practical and achievable requirement. For Object Detection R2, 90% recognition accuracy is preferred to prevent misidentifying other objects as drones, which could lead to wasted resources or unnecessary responses. Advances and Challenges in Drone Detection and Classification highlights the value of sensor fusion in improving detection accuracy, which would help the system distinguish drones from other objects in complex or cluttered environments. This accuracy is essential for military applications, where false detections could cause unnecessary alerts and distractions, and non-detections could leave the troops vulnerable. The requirements to Detect Drone Direction R3 with 90% accuracy and to Detect Drone Speed R4 are justified by the needs of our projectile interception. Though we know from our interviews that most drones drop projectiles while hovering, we have to be able to adapt to the scenario in which the drone is moving while it drops the projectile, in order to calculate the trajectory of the projectile. Moreover, high-speed detection is necessary, as it can alert troops to the presence of a drone well before the drone drops a projectile, allowing them to take cover and/or shoot the drone down with their gun. Similarly, Detecting Projectile Speed R5 and Detecting Projectile Direction R6 with an accuracy of within 5 degrees allows the system to predict where a projectile is heading, and to calculate where the net should be shot to redirect the projectile. As suggested by F.W., focusing not only on drones but on any incoming projectiles within range is essential to providing comprehensive situational awareness and maximizing the system's value in dynamic and potentially high-risk environments. The requirement to Track Drone with Laser R7 is justified by the need for precise targeting, and serves as a preemptive measure until the implementation of the interception is complete. A laser tracking system that tracks within a 1m² radius allows for monitoring the movement of drones, letting the troops know where the drone is. The ability to Intercept Projectiles R8 within a 1m² area ensures that the system can redirect the projectile using the 1m² net designed for our system. In Interview 1, F.W. explained that while interception methods like jamming are often preferable, kinetic interception may still be necessary for drones that pose an immediate threat. A Low Cost-to-Intercept R9 of under $50 per event is essential so that a single soldier or a small group of troops can afford this last line of defense; we want the price to intercept a drone to be close to, if not lower than, the cost of the drone itself, especially for military operations that may require frequent use. The article Countering the drone threat: implications of C-UAS technology for Norway in an EU and NATO context discusses the importance of keeping interception costs low to ensure sustainability and usability in ongoing operations.
By minimizing the cost per interception, the system remains practical and cost-effective for high-frequency use. Similarly, the requirement for Low Total Cost R10 (under $2000) ensures accessibility to smaller or single military units. B.D. noted that cost is a major constraint, especially for front-line volunteers who often rely on fundraising to support equipment needs. A lower total cost makes the system more widely deployable and achievable for those with limited budgets. Portability R11, with a target weight of under 3 kg, is crucial for ease of use and mobility. According to B.D., lightweight systems are essential for front-line soldiers, who have limited space and carry substantial gear. A portable system ensures that soldiers can transport it efficiently and integrate it into their equipment without compromising mobility or comfort. Ease of Deployment R12 is also essential, with a setup time of less than 5 minutes. B.D. emphasized that in unpredictable field environments, soldiers need a system that requires minimal setup. Quick deployment is highly important in dynamic situations where immediate action is required, allowing personnel to be prepared with minimal downtime. Quick Reload/Auto Reload R13 capabilities, with a reload time of under 30 seconds, enable the system to handle multiple threats in rapid succession. This requirement addresses the feedback from F.W., who noted the importance of speed in high-risk areas. Fast reloading helps maintain the system’s readiness, preventing delays in the event of multiple drone or projectile threats. Lastly, Environmental Durability R14 ensures that the system operates reliably across a wide temperature range and in adverse weather conditions. The Rise of Radar-Based UAV Detection For Military stresses that systems used in real-world military applications must withstand rain, dust, and extreme temperatures. Durability in harsh environments increases the system's utility, ensuring it remains effective regardless of weather or climate conditions.
Detection
Drone Detection
The Need for Effective Drone Detection
With the rapid advancement and production of unmanned aerial vehicles (UAVs), particularly small drones, new security challenges have emerged for the military sector.[8] Drones can be used for surveillance, smuggling, and launching explosive projectiles, posing threats to infrastructure and military operations.[8] Within our project we mostly look at the threat of drones launching explosive projectiles. Our objective is to develop a portable, last-line-of-defense system that can detect drones and intercept and/or redirect the projectiles they launch. An important aspect of such a system is the capability to reliably detect drones in real time, in possibly dynamic environments.[9] The challenge here is to create a solution that is not only effective but also lightweight, portable, and easy to deploy.
Approaches to Drone Detection
Numerous approaches have been explored in the field of drone detection, each with its own set of advantages and disadvantages.[10][9] The main methods include radar-based detection, radio frequency (RF) detection, acoustic-based detection, and vision-based detection.[8][10] It is essential for our project to analyze these methods within the context of portability and reliability, to identify the most suitable method, or combination of methods.
Radar-Based Detection
Radar-based systems are considered one of the most reliable methods for detecting drones.[10] Radar systems transmit short electromagnetic waves that bounce off objects in the environment and return to the receiver, allowing the system to determine attributes of the object such as its range, velocity, and size.[10][9] Radar is especially effective in detecting drones in all weather conditions and can operate over long ranges.[8][10] Radars, such as active pulse-Doppler radar, can track the movement of drones and distinguish them from other flying objects based on the Doppler shift caused by the motion of propellers (the micro-Doppler effect).[8][9][10]
Despite its effectiveness, radar-based detection systems come with certain limitations that must be considered. First, traditional radar systems are rather large and require significant power, making them less suitable for a portable defense system.[10] Additionally, radar can struggle to detect small drones flying at low altitudes due to their limited radar cross-section (RCS), particularly in cluttered environments like urban areas.[10] Millimeter-wave radar technology, which operates at high frequencies, offers a potential solution by providing better resolution for detecting small objects, but it is also more expensive and complex.[10][8]
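To give some intuition for the micro-Doppler effect described above, the short Python sketch below builds a synthetic, highly simplified radar return whose phase carries a small modulation from rotating blades, and looks at its spectrogram. Every number here is an invented illustration (real radar processing is far more involved); it only shows why blade motion produces sidebands around the body's Doppler line, which is what lets a radar tell a drone from a bird.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000.0                       # sample rate of the demodulated return (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
body_doppler = 400.0              # Doppler shift from the drone body (Hz), assumed
blade_rate = 60.0                 # propeller rotation rate (Hz), assumed

# Phase = body Doppler plus a weak sinusoidal term from the blade flashes.
phase = 2 * np.pi * body_doppler * t + 0.5 * np.sin(2 * np.pi * blade_rate * t)
signal = np.exp(1j * phase) + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

f, tt, Sxx = spectrogram(signal, fs=fs, nperseg=256, return_onesided=False)

# The strongest line sits near the 400 Hz body Doppler, flanked by weaker
# sidebands spaced at multiples of the 60 Hz blade rate: the micro-Doppler signature.
print(f[np.argmax(Sxx.mean(axis=1))])
```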
Radio Frequency (RF)-Based Detection
Another common method is detecting drones through radio frequency (RF) analysis.[8][9][10][11] Most drones communicate with their operators via RF signals, using the 2.4 GHz and 5.8 GHz bands.[8][10] RF-based detection systems monitor the electromagnetic spectrum for these signals, allowing them to identify the presence of a drone and its controller on these RF bands.[10] One advantage of RF detection is that it does not require line-of-sight, meaning the detection system does not need a view of the drone.[10] It can also operate over long distances, making it effective in a wide range of scenarios.[10]
However, RF-based detection systems do have their limitations. They are unable to detect drones that do not rely on communication with an operator, such as autonomous drones.[9] The systems are also less reliable in environments where many RF signals are present, such as cities.[10] Therefore, in situations where high precision and reliability are a must, RF-based detection might not be suitable.
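In practice, the decision logic of a simple RF detector boils down to spotting transmissions that rise well above the local noise floor. Below is a minimal illustrative sketch of that idea in Python; it assumes complex IQ samples have already been captured by some SDR front end tuned near a drone control band (the capture step, the detection margin, and the synthetic test signal are all assumptions, not a tested design):

```python
import numpy as np

def rf_activity_detected(iq_samples: np.ndarray, margin_db: float = 10.0) -> bool:
    """Flag activity when the peak spectral power rises well above the median noise floor."""
    spectrum = np.fft.fftshift(np.fft.fft(iq_samples))
    power_db = 10 * np.log10(np.abs(spectrum) ** 2 + 1e-12)
    noise_floor = np.median(power_db)
    return power_db.max() > noise_floor + margin_db

# Synthetic test: complex noise plus a narrowband tone standing in for a control link.
n = 65536
t = np.arange(n) / 2.4e6                                    # assumed 2.4 MS/s capture
iq = 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))   # noise
iq += 0.5 * np.exp(2j * np.pi * 200e3 * t)                  # tone at a 200 kHz offset
print(rf_activity_detected(iq))                             # True
```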
Acoustic-Based Detection
Acoustic detection systems rely on the unique sound signature produced by drones, particularly the noise generated by their propellers and motors.[10] These systems use highly sensitive microphones to capture these sounds and then analyze the audio signals to identify the presence of a drone.[10] The advantage of this type of detection is that it is rather low-cost and does not require line-of-sight; it is therefore mostly used for detecting drones behind obstacles in non-open spaces.[10][8]
However, it also has its disadvantages. In environments with a lot of noise, such as battlefields, these systems are not as effective.[9][10] Additionally, some drones are designed to operate silently.[9] Acoustic detection also only works at rather short range, since sound weakens over distance.[10]
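The propeller noise mentioned above shows up as concentrated energy in a band of blade-pass harmonics. As a rough illustration, the following Python sketch estimates the power in such a band with Welch's method and compares it to the rest of the spectrum. The file name, band limits, and detection threshold are assumptions that would need calibration against real recordings (such as the drone sound samples analysed in MATLAB in our logbook):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

rate, audio = wavfile.read("drone_sample.wav")   # hypothetical recording
audio = audio.astype(np.float64)
if audio.ndim > 1:                               # mix stereo down to mono
    audio = audio.mean(axis=1)

freqs, psd = welch(audio, fs=rate, nperseg=4096)
band = (freqs > 100) & (freqs < 2000)            # assumed propeller-harmonic band
band_power = psd[band].sum()
rest_power = psd[~band].sum() + 1e-12

if band_power / rest_power > 5.0:                # assumed threshold, needs calibration
    print("possible drone acoustic signature detected")
```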
Vision-Based Detection
Vision-based detection systems use cameras, either in the visible or infrared spectrum, to detect drones visually.[8][10] These systems rely on image recognition algorithms, often making use of machine learning.[10][11] Drones are then detected based on their shape, size and movement.[11] The main advantage of this type of detection is that the operators themselves can confirm the presence of a drone and can differentiate between a drone and other objects such as birds.[10]
However, there are also disadvantages to vision-based detection systems.[9][10] They are highly dependent on environmental conditions: they need a clear line-of-sight and good lighting, and weather conditions can impact their accuracy.[9][10]
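As recorded in the logbook, we experimented with YOLOv8 for exactly this kind of vision-based detection. A minimal real-time inference sketch with the ultralytics package is shown below; the weights file name is hypothetical and stands in for a model fine-tuned on a drone dataset:

```python
from ultralytics import YOLO

# Hypothetical weights fine-tuned on a drone dataset; the stock yolov8n.pt
# would report generic COCO classes instead.
model = YOLO("drone_yolov8n.pt")

# source=0 reads the default webcam; stream=True yields results frame by frame.
for result in model.predict(source=0, stream=True, conf=0.5):
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()    # bounding-box corners in pixels
        print(f"drone candidate at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
              f"confidence {float(box.conf[0]):.2f}")
```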
Best Approach for Portable Drone Detection
For our project, which focuses on a portable system, the ideal drone detection method must balance effectiveness, portability and ease of deployment. Based on this, a sensor fusion approach appears to be the most appropriate.[10]
Sensor Fusion Approach
Given the limitations of each individual detection method, a sensor fusion approach, which would combine radar, RF, acoustic and vision-based sensors, offers the best chance of achieving reliable and accurate drone detection in a portable system.[10] Sensor fusion allows the strengths of each detection method to complement the weaknesses of the others, providing more effective detection in dynamic environments.[10]
- Radar Component: A compact, millimeter-wave radar system would provide reliable detection in different weather conditions and across long ranges.[9] While radar systems are traditionally bulky, recent advancements make it possible to develop portable radar units that can be used in a mobile system.[10] These would most likely be less effective; the sensor fusion approach compensates for this.[10]
- RF Component: Integrating an RF sensor will allow the system to detect drones communicating with remote operators.[10] This component is lightweight and relatively low-power, making it ideal for a portable system.[10]
- Acoustic Component: Adding acoustic sensors can help detect drones flying at low altitudes or behind obstacles, where radar may struggle.[8][10] This component is essentially just a microphone, with the rest handled in software, making it also ideal for a portable system.[10]
- Vision-Based Component: A camera system equipped with machine learning algorithms for image recognition can provide visual confirmation of detected drones.[11][10] This component can be added using a lightweight, wide-angle camera, which again does not restrict the device from being portable.[10]
Conclusion
To achieve portability in our system we have to restrict certain sensors and/or components; to still achieve effective drone detection, the best approach is therefore sensor fusion. The system would integrate radar, RF, acoustic and vision-based detection, which together compensate for each other's limitations, resulting in an effective, reliable and portable system.
Sensor Fusion
When it comes to detection, sensor fusion is essential for integrating inputs from multiple sensor types to achieve higher accuracy and reliability in dynamic conditions; in our case these are radar, radio frequency, acoustic and vision-based inputs. Sensor fusion can occur at different stages, commonly distinguished as early and late fusion.[12]
Early fusion integrates raw data from various sensors at the initial stage, creating a unified dataset for processing. This approach captures the relations between different data sources, but requires extensive computational resources, especially when the data from the different sensors are not of the same type, for example acoustic versus visual data.[12]
Late fusion integrates the processed outputs/decisions of each sensor. This method allows each sensor to apply its own processing approach before the data is fused, making it more suitable for systems where sensor outputs vary in type. According to recent studies in UAV tracking, late fusion improves robustness by allowing each sensor to operate independently under its optimal conditions.[12][13]
Therefore, we can conclude that for our system late fusion is best suited.
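To make this concrete, a decision-level (late) fusion step can be as simple as a weighted vote over per-sensor detection confidences. The sketch below is a minimal illustration of that idea; the weights and threshold are assumptions that would have to be tuned on test data, much as we did when applying sensor fusion methodology to our own microphone and camera test results:

```python
# Each sensor reports an independent detection confidence in [0, 1];
# the weights reflect assumed trust in each modality.
SENSOR_WEIGHTS = {"radar": 0.35, "rf": 0.25, "acoustic": 0.15, "vision": 0.25}

def fuse_detections(confidences: dict[str, float], threshold: float = 0.5) -> bool:
    """Weighted vote over the sensors that reported; silent sensors contribute nothing."""
    score = sum(SENSOR_WEIGHTS[name] * conf for name, conf in confidences.items())
    return score >= threshold

# Example: radar and vision fairly confident, acoustic unsure, RF silent
# (e.g. an autonomous drone with no control link).
print(fuse_detections({"radar": 0.8, "vision": 0.7, "acoustic": 0.4}))  # True (score 0.515)
```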
Algorithms for Sensor Fusion in Drone Detection
- Extended Kalman Filter (EKF): EKF is widely used in sensor fusion for its ability to handle nonlinear data, making it suitable for tracking drones in real-time by predicting trajectory despite noisy inputs. EKF has proven effective for fusing data from radar and LiDAR, which is essential when estimating an object's location in complex settings like urban environments.[13]
- Convolutional Neural Networks (CNNs): Primarily used in vision-based detection, CNNs process visual data to recognize drones based on shape and movement. CNNs are particularly useful in late-fusion systems, where they can add a visual confirmation layer to radar or RF detections, enhancing overall reliability.[12][13]
- Bayesian Networks: These networks manage uncertainty by probabilistically combining sensor inputs. They are highly adaptable in scenarios with varied sensor reliability, such as combining RF and acoustic data, making them suitable for applications where conditions can impact certain sensors more than others.[12]
- Decision-Level Fusion with Voting Mechanisms: This algorithmic approach aggregates sensor outputs based on their agreement or “votes” regarding an object's presence. This simple yet robust method enhances detection accuracy by prioritizing consistent detections across sensors.[12]
- Deep Reinforcement Learning (DRL): DRL optimizes sensor fusion adaptively by learning from patterns in sensor data, making it particularly suited for applications requiring dynamic adjustments, like drone tracking in unpredictable environments. DRL has shown promise in managing fusion systems by balancing multiple inputs effectively in real-time applications.[12][13]
These algorithms have demonstrated efficacy across diverse sensor configurations in UAV applications. EKF and Bayesian networks are particularly valuable when fusing data from radar, RF, and acoustic sources, given their ability to manage noisy and uncertain data, while CNNs and voting mechanisms add reliability in vision-based and multi-sensor contexts. However, without testing, no conclusions can be drawn about which algorithms can be applied effectively and which would work best.[12][13]
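As an illustration of the first algorithm in the list above, the sketch below implements the linear constant-velocity special case of the Kalman filter (the EKF reduces to exactly this form when the motion and measurement models are linear), fusing noisy position fixes, such as radar returns, into a smoothed position and velocity estimate. The time step, noise covariances, and measurements are assumed values for illustration:

```python
import numpy as np

dt = 0.1                               # assumed update interval (s)
F = np.array([[1, 0, dt, 0],           # constant-velocity state transition,
              [0, 1, 0, dt],           # state x = [px, py, vx, vy]
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],            # we only measure position
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                   # assumed process noise
R = 4.0 * np.eye(2)                    # assumed measurement noise

x = np.zeros(4)                        # initial state estimate
P = 10.0 * np.eye(4)                   # initial uncertainty

def kf_step(z):
    """One predict/update cycle given a noisy position measurement z = [px, py]."""
    global x, P
    x = F @ x                          # predict state forward
    P = F @ P @ F.T + Q
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x

for z in ([0.0, 0.0], [1.1, 0.4], [2.0, 1.1]):   # synthetic radar fixes
    estimate = kf_step(np.array(z))
print("estimated velocity:", estimate[2:])
```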
Interception
Protection against Projectiles Dropped by Drones
In modern warfare and security scenarios, small drones have emerged as cheap and effective tools capable of carrying payloads that can be dropped over critical areas or troops. Rather than intercepting the drones themselves, which would require a high-cost interception method, we shifted our focus to intercepting the projectiles they drop: these are usually artillery shells, which have an easily computable trajectory and are far lighter and thus cheaper and easier to intercept. By targeting these projectiles as a last line of defense, our system provides a more cost-effective way to neutralize potential threats only when necessary. This approach minimizes the resources spent on countering non-threatening drones while concentrating defensive efforts on imminent, high-risk projectiles, as a last resort for individual troops to remain safe.
Key Approaches to Interception
Kinetic Interceptors:
Kinetic methods involve the direct impact destruction or incapacitation of projectiles dropped by drones. These systems are designed for medium- to long-range distances and include missile-based and projectile-based interception systems. For example, the Raytheon Coyote Block 2+ missile is a kinetic interceptor designed to counter small aerial threats, such as drone projectiles. The Coyote's design allows it to engage moving targets with precision and agility. Originally developed as a UAV, the Coyote has been adapted for use as a missile system, with each missile costing approximately $24,000, a disproportionately high cost for destroying relatively inexpensive threats like drone-deployed projectiles[14]. The precision and effectiveness of kinetic systems like the Coyote make them particularly valuable for high-priority and threatening targets, despite the high cost-to-intercept.
Electronic Warfare (Jamming and Spoofing)
Electronic warfare techniques, such as radio frequency (RF) jamming and GNSS jamming, disrupt the control signals of drones, causing them to lose connectivity to their controller or to the satellite signal. Spoofing, on the other hand, involves hijacking the communication system of the drone and giving it instructions that benefit you, such as releasing the projectile in a safe place. While jamming is non-lethal, it may affect other electronics nearby and is ineffective against autonomous drones that don’t rely on external signals. For example, DroneShield’s DroneGun MKIII is a portable jammer capable of disrupting RF and GNSS signals up to 500 meters away. By targeting a drone’s control signals, the DroneGun can cause the drone to lose connection, descend, or prematurely release its payload, which can then be intercepted by other defenses. However, RF jamming can interfere with nearby electronics, making it most suitable for use in remote or controlled environments, to minimize the collateral damage to your own electronic infrastructure. This system has demonstrated effectiveness in remote military applications and large open spaces where the risk of collateral interference is minimized[15][16].
Directed Energy Weapons (Lasers and Electromagnetic Pulses)
Directed energy systems like lasers and electromagnetic pulse (EMP) weapons are designed to disable dropped projectiles by damaging electrical components or destroying them outright. Lasers provide precision and instant engagement with minimal collateral damage, although they are very expensive and lose effectiveness in environmental conditions like rain or fog. EMP systems can disable multiple projectiles simultaneously but may interfere with other electronics in the vicinity. For example, Israel’s Iron Beam is a high-energy laser system developed by Rafael Advanced Defense Systems for intercepting aerial threats, including projectiles dropped by drones, though mainly other missiles. Unlike kinetic interceptors, Iron Beam offers a lower-cost engagement per interception. EMPs, on the other hand, provide a broad area effect, allowing simultaneous disabling of multiple projectiles. However, EMP systems may also disrupt nearby electronics, limiting their use in civilian-populated areas[17].
Net-Based Capture Systems
Net-based systems use physical nets to capture projectiles mid-flight, stopping them from reaching their target by redirecting them. Nets can be launched from ground platforms or other drones, effectively intercepting low-speed, low-altitude projectiles. This method is non-lethal and minimizes collateral damage, though it has limitations in range and reloadability. For example, Fortem Technologies’ DroneHunter F700 is a specialized drone designed to intercept other drones or projectiles by deploying high-strength nets. The DroneHunter captures aerial threats, stopping them from completing their intended path, thus minimizing potential damage. However, net-based systems have limitations in range and require reloading unless automated, which can slow response time during scenarios where the threat of drones dropping projectiles is constant[18].
Geofencing
Geofencing involves creating virtual boundaries around sensitive areas using GPS coordinates. Drones equipped with geofencing technology are automatically programmed to avoid flying into restricted zones. This method is proactive, preventing drones from even getting close to any troops, but it can be bypassed by modified or non-compliant drones. DJI, a major drone manufacturer, has integrated geofencing technology into its drones, preventing its users from flying in restricted zones. This feature allows DJI to choose which areas drones can and cannot enter, and provides a non-lethal preventive measure. However, geofencing requires drone manufacturer cooperation, and modified or non-compliant drones can bypass these restrictions, making it unreliable as a standalone measure[19].
Objectives of Effective Drone Neutralization
When designing or selecting a drone interception system, several key objectives must be prioritised:
- Low Cost-to-Intercept: Interception costs are critical, as small, cheap drones can carry projectiles that may cost significantly less than the interception method itself. Low-cost systems, like net-based options, are preferred for frequent engagements. Conversely, more expensive kinetic methods may be necessary for high-speed or armored projectiles. Raytheon’s Coyote missiles are an example of the cost tradeoff of kinetic systems and highlight the economic considerations that come into play in military versus civilian contexts[14].
- Portability: Interception systems should ideally be lightweight, collapsible, and transportable across various settings. Portable systems, like the DroneGun MKIII jammer and net-based launchers, enable rapid setup and adaptability to various operational environments, making them valuable in mobile defense scenarios[15].
- Ease of Deployment: Quick deployment is essential in dynamic scenarios like military operations or large-scale events. For example, drone-based net systems and RF jammers mounted on mobile units offer flexible deployment options, allowing for rapid response in fast-moving situations[16].
- Quick Reloadability or Automatic Reloading: In high-threat environments, interception systems with rapid or automated reloading capabilities ensure continuous defense. Systems like lasers and RF jammers support quick re-engagement, while net throwers and kinetic projectiles may require manual reloading, potentially reducing efficiency in sustained threats[20].
- Minimal Collateral Damage: In urban or civilian areas, minimizing collateral damage is of utmost importance. Non-lethal interception methods, such as jamming, spoofing, and net-based systems, provide effective solutions that neutralize threats without excessive environmental or infrastructural impact. Systems like the Fortem DroneHunter F700 illustrate the potential for non-destructive interception in urban areas[18].
Evaluation of Drone Interception Methods
Pros and Cons of Drone Interception Methods
- Jamming (RF/GNSS)
- Pros: Jamming effectively disrupts communications between drones and operators, often forcing premature payload drops. Non-destructive and widely applicable, jamming can target multiple drones simultaneously, making it well-suited to civilian defense applications[16].
- Cons: Jamming is limited in effectiveness against autonomous drones and can interfere with nearby electronics, posing a risk in urban areas where collateral electronic disruption can impact civilian infrastructure.
- Net Throwers
- Pros: Non-lethal and environmentally safe, nets can physically capture projectiles without destroying them, making them ideal for urban settings where collateral damage is a concern.
- Cons: Effective primarily against slow-moving, low-altitude projectiles, net throwers require manual reloading between uses, which limits their response time during sustained threats unless automated.
- Missile Launch
- Pros: High precision and range, missile systems like Raytheon's Coyote are effective for engaging fast-moving or long-range targets and are ideal in military settings for large-scale aerial threats[14].
- Cons: High cost per missile, risk of collateral damage, and infrastructure requirements restrict missile use to defense zones rather than civilian settings.
- Lasers
- Pros: Laser systems are silent, precise, and capable of engaging multiple targets without producing physical debris. This makes them valuable in urban environments where damage control is essential[20].
- Cons: Lasers are costly, sensitive to environmental conditions like fog and rain, and have high energy demands that complicate portability, limiting their field application[17].
- Hijacking
- Pros: Allows operators to take control of drones without destroying them. It’s a non-lethal approach, ideal for situations where it’s essential to capture the drone intact.
- Cons: Hijacking poses collateral risks for surrounding electronics, has limited range, and is operationally complex in active field environments, requiring specialized training and equipment.
- Spoofing
- Pros: Non-destructively diverts drones from sensitive areas by manipulating signals, suitable for deterring drones from critical zones[16].
- Cons: Technically complex and less effective against drones with advanced anti-spoofing technology, requiring specialized skills and equipment.
- Geofencing
- Pros: Geofencing restricts compliant drones from entering sensitive zones, creating a non-lethal preventive barrier that offers permanent coverage[19].
- Cons: Reliance on manufacturers for integration and potential circumvention by modified drones limits geofencing as a standalone defense measure in high-risk scenarios.
Pugh Matrix
The Pugh Matrix is a decision-making tool used to evaluate and compare multiple options against a set of criteria. By systematically scoring each option across various criteria, the Pugh Matrix helps to identify the most balanced or optimal solution based on the chosen priorities. In this report, we created a Pugh Matrix to assess different interception methods for countering projectiles dropped by drones.
Each method was evaluated across six key criteria: Cost-to-Intercept, Portability, Ease of Deployment, Reloadability, Minimum Collateral Damage, and Effectiveness. For each criterion, the methods were scored as Low (1 point), Medium (2 points), or High (3 points), reflecting their relative strengths and limitations in each area. The scores were then totaled to provide an overall assessment of each method’s viability as a counter-projectile solution. This approach enables a comprehensive comparison, highlighting methods that provide a balanced combination of cost-efficiency, ease of use, and effectiveness in interception.
{| class="wikitable"
|+Pugh Matrix of interception methods
!Method
!Minimal Cost-to-Intercept
!Portability
!Ease of Deployment
!Reloadability
!Minimum Collateral Damage
!Effectiveness
!Total Score
|-
|Jamming (RF/GNSS)
|Medium
|High
|High
|High
|High
|Medium
|10
|-
|Net Throwers
|High
|High
|High
|Medium
|High
|High
|11
|-
|Missile Launch
|Low
|Low
|Medium
|Low
|Low
|High
|5
|-
|Lasers
|High
|Medium
|Medium
|High
|High
|High
|8
|-
|Hijacking
|High
|High
|Medium
|Low
|High
|Medium
|8
|-
|Spoofing
|Medium
|High
|Medium
|Medium
|High
|Medium
|8
|-
|Geofencing
|High
|High
|High
|High
|High
|Low
|10
|}
The resulting scores can be found in the Pugh Matrix above, where Net Throwers scored the highest with 11 points, indicating strong performance across several criteria, particularly in minimizing collateral damage and cost-effectiveness. Other methods, such as Jamming and Geofencing, also scored well, while missile-based solutions, despite high effectiveness, scored lower due to high costs and limited portability. Consequently, we will be focusing on Net Throwers as our main interception mechanism.
Path Prediction of Projectile[21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39]
Theory:
Catching a projectile requires several steps. First the projectile has to be detected, after which its trajectory has to be determined. If we know how the projectile moves through space and time, the net can be shot to catch it. However, depending on the distance to the projectile, it takes a different amount of time for the net to reach it, and in this time the projectile has moved to a different location. The net must therefore be shot towards the position where the projectile will be in the future, such that the two collide.
Since projectiles do not make sound and do not emit RF waves, they are not as easy to detect as drones. For this part the assumption is made that the projectile is visible. Making the system also detect projectiles which are not visible would probably be possible, but it would complicate matters considerably. The U.S. army, for example, uses electronic field sensors which can detect passing bullets; something similar could be used to detect projectiles which are not visible, but this will not be done in this project due to the complexity.
To detect a projectile, a camera with tracking software is used. The drones themselves also have to be detected by this camera, which is done by training an AI model for drone detection.
Now that the projectile is in sight, its trajectory has to be determined. The projectile only decelerates due to air friction and accelerates due to gravity. For a first model, air friction can be neglected to get a good approximation of the projectile's flight. Since not every projectile experiences the same amount of air resistance, the best friction coefficient should be found experimentally by dropping different projectiles: the best coefficient is the one for which most projectiles are caught by the system. An improvement would be to pre-assign different friction coefficients to different sizes of projectiles. Since surface area plays a big role in the amount of friction a projectile experiences, this is a reasonable refinement.
With the expected path of the projectile known, the net can be launched to catch the projectile midair. Basic kinematics gives accurate results for these problems. Moreover, the problem can be treated as two-dimensional: since we only protect against projectiles coming towards the system, we can always define a plane that contains both the trajectory of the projectile and the system. If the path of the projectile were to leave this plane and become a three-dimensional problem, the system does not need to protect against that projectile, as it does not form a threat to anything in the plane.
Calculation:
A Mathematica script was written which calculates at which angle the net should be shot. The script currently uses placeholder values, which have to be determined experimentally based on the hardware that is used. For example, the speed at which the net is shot should be measured and updated in the code to get a good result. The distance and speed of the projectile can be determined using sensors on the device. The output of the Mathematica script is shown in figure PP1. It gives the angle at which the net should be shot, as well as the trajectories of the net and projectile to visualize how the interception will happen.
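For illustration, the same calculation can be sketched outside Mathematica. The Python snippet below solves the drag-free 2D interception problem: because gravity acts equally on the net and the projectile, it cancels out of their relative motion, leaving a quadratic in the flight time. The net speed and the projectile's state are placeholder values, just like the placeholder values in the script above.
<syntaxhighlight lang="python">
import math

def interception(v_net, d, h, vx=0.0, vz=0.0):
    """Drag-free 2D interception. The net is fired from the origin at
    speed v_net; the projectile is at horizontal distance d and height h,
    moving with velocity (vx, vz). Gravity cancels from the relative
    motion, leaving a quadratic in the flight time t:
        (v_net^2 - vx^2 - vz^2) t^2 - 2 (d vx + h vz) t - (d^2 + h^2) = 0
    Returns the launch angle in degrees and the time to impact."""
    a = v_net**2 - vx**2 - vz**2
    b = -2.0 * (d * vx + h * vz)
    c = -(d**2 + h**2)
    disc = b**2 - 4.0 * a * c
    if a <= 0.0 or disc < 0.0:
        raise ValueError("net is too slow to reach the projectile")
    t = (-b + math.sqrt(disc)) / (2.0 * a)   # the unique positive root
    theta = math.atan2(h + vz * t, d + vx * t)
    return math.degrees(theta), t

# Placeholder values: net fired at 20 m/s at a projectile just released
# from rest, 15 m away at 52 m height.
print(interception(v_net=20.0, d=15.0, h=52.0))
</syntaxhighlight>
For a projectile released from rest (vx = vz = 0) this reduces to aiming directly at the projectile's current position, the classic result for drag-free kinematics.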
Accuracy:
The height at which projectiles are dropped can be estimated by looking at footage of projectiles being dropped. The height can be determined by assuming the projectile falls in vacuum, which represents reality well here; the fallen height is then given by 0.5*g*t^2. Using a YouTube video[40] as data, it can be seen that almost every drop takes at least 4 seconds, meaning the projectiles are dropped from at least 78.5 m. Suppose we catch the projectile at two thirds of its fall, still leaving plenty of height for it to be redirected safely, and the net is 1 by 1 meter (so half a meter from its center to the closest side). Then the projectile must not be dropped more than 0.75 meter next to us (outside of the plane), since the system would not catch it even if everything else went perfectly (see figure PP2 for the calculation). Even if the projectile were dropped 0.7 meter next to the device, the net would only hit the projectile with its edge, which does not guarantee that the projectile stays in the net.
An explosive projectile will still do damage when it lands 0.75 meters away from a person. This means that the earlier assumption that a 2D model suffices, because everything happens in a plane, does not fulfill all our needs. Enemy drones do not need centimeter accuracy: explosive projectiles, like grenades, can kill a person even from 10 meters away. For better results, a 3D model should therefore be used.
3D model:
We tried to replicate the 2D model in 3D, but this did not work out with the same generality. For this reason some extra assumptions were made. These assumptions are based on reality and are therefore still valid for this system. The only thing they change is the generality of the model: it can no longer be used in arbitrary cases, only for projectiles dropped from drones.
In the 2D model the starting velocity of the projectile could be varied. In reality, however, drones do not have a launching mechanism and simply release the projectile without any starting velocity, meaning the projectile drops straight down (except for some sideways movement due to external forces like wind). This was noted after watching a lot of drone warfare footage, where it was also observed that drones do not usually crash into people but mainly into tanks, since tanks require an accurate hit between the base and the turret. For people, drones drop the projectile from a standstill (to get the required aim). This simplification also makes the 2D model valid again: since the projectile has no sideways movement, it never leaves the plane, and we can always construct a plane containing both the path of the projectile and the mechanism which shoots the net.
Since this mechanism operates in the real (3D) world, it was decided to plot the 2D model at the required angle in 3D, giving a good representation of how the mechanism will work. The new model also gives the required shooting angle and then shows the paths of the net and projectile in 3D. To get further insight, the 2D trajectories of the net and projectile are also plotted; this can be seen in figure PP3.
Accuracy 3D model:
The 3D model as it is now set up only works in a “perfect” world, with no wind, no air resistance, and no other external factors that may influence the paths of the projectile and the net. We also assume that the system knows the drone's position with infinite accuracy. In reality this is simply not true, so it is important to know how closely this model replicates reality and whether it can be used.
Wind plays a big role in the paths of the projectile and the net, so the model must also function under these circumstances. To determine the acceleration of the projectile and the net, the drag force on both objects must be determined. Two important factors on which the drag force depends are the drag coefficient and the frontal area of the object. Since different projectiles are used during warfare, like hand grenades or Molotov cocktails, the exact drag coefficient and frontal area of the projectile are unknown. After a dive into the literature it was decided to take average values for the drag coefficient and the frontal area, since the reported values lay in the same range. For the frontal area this could be predicted, since the drones can only carry objects of limited size. After some calculations (see figure PP4) it was found that if the net (including the weights on the corners) weighs 3 kg, the accelerations of the projectile and the net due to wind are identical, still leading to a perfect interception, as can be seen in figure PP5. This is based on literature values; at a later stage the exact drag coefficient and surface area of the net must be measured and the weight changed accordingly. As for projectiles which do not exactly match the assumed drag coefficient or surface area, the model shows that deviations of up to 50% of the used values do not push the projectile far enough off course for the interception to miss. This range includes almost all theoretical values found for the different projectiles, making the model highly reliable under the influence of wind.
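The matching condition behind this result can be made explicit with a short sketch: the wind-induced acceleration of an object is a = 0.5*ρ*C_d*A*v²/m, so the net and the projectile drift identically when their ratios C_d*A/m are equal. All numbers below are illustrative placeholders (chosen so the example lands near the 3 kg net mass mentioned above), not the literature values behind figure PP4.
<syntaxhighlight lang="python">
RHO = 1.225  # air density at sea level [kg/m^3]

def wind_acceleration(c_d, area_m2, mass_kg, v_rel=10.0):
    """Drag acceleration for a relative wind speed v_rel [m/s]:
    a = 0.5 * rho * C_d * A * v_rel^2 / m."""
    return 0.5 * RHO * c_d * area_m2 * v_rel**2 / mass_kg

# Illustrative placeholder values for a grenade-like projectile:
cd_p, area_p, m_p = 0.47, 0.008, 0.40
a_projectile = wind_acceleration(cd_p, area_p, m_p)

# The net drifts identically when its C_d*A/m matches the projectile's,
# which fixes the required net mass for an assumed net drag area:
cd_net, area_net = 1.1, 0.026        # porous net: small effective area
m_net = cd_net * area_net / (cd_p * area_p / m_p)

print(m_net)                                                     # ~3.0 kg
print(a_projectile, wind_acceleration(cd_net, area_net, m_net))  # equal
</syntaxhighlight>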
An uncertainty within the system is the exact location of the drone. We aim to know accurately where the drone, and thus the projectile, is, but in reality infinitely accurate localization is unachievable; we can only get close. The sensors in the system must therefore be optimized to locate the drone as well as possible. Fortunately there are sensors able to achieve high accuracy, for example a LiDAR sensor with a range of 2000 m and an accuracy of 2 cm. The 2000 m range comfortably covers the range at which our system operates, and the 2 cm accuracy is far smaller than the size of the net (100 cm by 100 cm), so this should not cause problems for the interception.
Mechanical Design
Introduction
To thoroughly detail the mechanical design and the design choices made throughout the design process, each component will be discussed individually. For each component, a brief description will be provided as to why this feature is necessary in the system, followed by a number of possible component design choices, and finally which design choice was chosen and why. To conclude, an overview of the system as a whole will be given along with a short evaluation and some next steps for improvements to the system beyond the scope of this project.
Kinetic Interception Charge (A)
Feature Description
Kinetic interception techniques include a wide range of drone and projectile interception strategies that, as the name suggests, contain a kinetic aspect, such as a shotgun shell or a flying object. This method of interception relies on a physical impact or near-impact knocking the target off course or disabling it entirely. The necessity for kinetic interception capabilities was made clear during the first target market interview, in which the possibility of jamming and (counter-)electronic warfare was discussed, and with it the consequent necessity to be able to intercept drones and dropped projectiles in a non-electronic fashion.
Possible Design Choices
- Shotgun shells
- Using a mounted shotgun to kinetically intercept drones and projectiles is a possibility, proven also by the fact that shotguns are used as last lines of defense by troops at fixed positions as well as during transport [41]. Shotguns are also relatively widely available and the system could be designed to be adaptable to various shotgun types. However, the system would be very heavy and would need to be able to withstand the very powerful recoil of a shotgun firing.
- Net charge
- A net is a light-weight option which can be used to cover a relatively large area-of-effect consistently. This would make the projectile or drone much easier to hit, and thus contribute to the reliability of the system. The light weight would allow it to be rotated and fired faster than heavier options, as well as reducing the weight of any additional charges carried for the system, a critical factor mentioned in both the first and the second target market interviews.
- Air-burst ammunition
- This type of ammunition is designed to detonate in the air close to the incoming projectile or drone. It is very effective at preventing these hazards from reaching their targets, but it sharply increases the cost-to-intercept, a concept introduced earlier in the interviews. It is also the most expensive of the three options, which makes it less suitable for this system.
Selected Design
The chosen kinetic interception charge comprises a light-weight net of a square surface area of 1m^2. The net has thin, weighted disks on the end, which are stacked on top of one another to form a cylindrical charge which can be loaded into the spring-loaded or pneumatically-loaded turret. This charge is then fired at incoming targets, and when done so, the net spreads out and catches the projectile where the path prediction algorithm calculates it will be by the time the net reaches it. This decision is based on the net's light weight, low cost-to-intercept, quick reloading capability, and reliability for the diversion of a projectile caught within its surface area.
Rotating Turret (B)
Feature Description
The rotating turret system is designed to provide 360-degree rotational freedom for the attached net charge, allowing engagement of threats from any direction. Not only does the turret rotate 360 degrees in the horizontal plane, it also rotates 90 degrees in the vertical plane, i.e. from completely horizontal to completely vertical. This capability is critical, especially in the context of fast-paced wars with ever-changing frontlines, such as the one in Ukraine, where drone attacks can come from all around any given position. The versatility of a rotating turret significantly enhances the system's ability to respond to these aerial threats. Two stepper motors fitted with rotary encoders rapidly move the turret, one for each plane of rotation. The encoders keep the electronics aware of the current orientation of the net charge so it can be moved to the correct position once a threat has been detected.
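As a minimal illustration of the aiming logic, the sketch below converts a target azimuth and elevation into absolute step counts for the two steppers. The step resolution, microstepping factor and 1:1 gearing are assumptions for the sketch, not measured values of the prototype.
<syntaxhighlight lang="python">
STEPS_PER_REV = 200 * 16   # assumed: 1.8-degree stepper, 16x microstepping

def angles_to_steps(azimuth_deg, elevation_deg):
    """Map a target orientation to absolute step counts (1:1 gearing
    assumed). Azimuth wraps over the full 360 degrees; elevation is
    clamped to the turret's 0-90 degree range."""
    azimuth_deg %= 360.0
    elevation_deg = max(0.0, min(90.0, elevation_deg))
    az_steps = round(azimuth_deg / 360.0 * STEPS_PER_REV)
    el_steps = round(elevation_deg / 360.0 * STEPS_PER_REV)
    return az_steps, el_steps

print(angles_to_steps(270.0, 45.0))   # (2400, 400)
</syntaxhighlight>
In the real system the rotary encoders close the loop, correcting the commanded step counts against the measured orientation.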
Sensor 'Belt' (C)
Feature Description
The sensor ring around the bottom of the turret contains slots for three cameras and microphones. Furthermore, the ring and the main body interior can be adapted to house further sensor components, such as radar, thermal imaging or RF detectors. Fitting the system with a range of sensors plays an important role in ensuring that threats of different types can be detected, and provides a critical advantage in the effectiveness of the system under various weather and environmental conditions, such as dense trees and foliage around the system, or fog or smoke in the vicinity.
Possible Design Choices
- Sensors
- Camera
- Best choice for open areas, as well as any situation where there is a direct line of sight to the drone, and consequently the projectile it drops. Three cameras each covering 120 degrees of space would be combined to provide a view of the entire 360 degree environment. However, as soon as line of sight is broken, the cameras alone are insufficient to detect drones effectively.
- Microphone
- Fitting the system with microphones complements the use of cameras effectively. Even when line of sight is broken, the microphone can pick up the sound of the drone flying. This is done by actively performing frequency analysis on the audio recorded by the microphones and checking whether sound frequencies typically related to flying drones are present, including those given off by the small motors which typically power the rotor blades. If these frequencies are significantly present, the sound is interpreted as the detection of a drone nearby. A shortcoming of the microphone is its relatively small range, and it works less well with background noise, such as loud explosions or frequent firing.
- Radar
- While the range of radar sensors is typically very large, there is a significant limitation to this type of detection for small, fast FPV drones. The detectable 'radar cross-section' of FPV drones is very small, and radars are often designed and calibrated to detect much larger objects, such as fighter jets or naval vessels. This means a specialized radar component would be required, which would be very expensive and likely difficult to source if a high-end option were necessary; some budget options are, however, available and are discussed earlier in this wiki. Finally, an additional advantage radar could provide is the detection of larger and slower surveillance drones flying at higher altitude. Detecting these and warning the soldiers of their presence would allow them to better prepare for incoming surveillance, and consequently for an influx of FPV drone attacks on their position if it is discovered.
- RF Detector
- The RF detector is a very useful device which senses the radio frequencies (RF) emitted by drones, drone controllers, and other electronic equipment. By analyzing these frequencies for those typically used by drones to communicate with their pilots, drones in the vicinity can quickly be detected. In theory, this could also be used to block that signal and prevent control of the drone near the system.
- Sensor implementation
- Sensors implemented directly onto the moving turret
- This option fixes the sensors relative to the turret and the horizontal orientation of the net charge. This would mean that the turret would rotate until the cameras detect the drone at a predetermined 'angle zero', which aligns with where the turret points.
- Ring/Belt around the central turret
- With a fixed ring around a moving turret, the cameras (and other sensors) and the turret are not fixed relative to one another. Instead, the sensors are fixed with respect to the rest of the system, and thus with the ground/environment.
Selected Design
Having the sensors implemented directly on the turret has the significant downside that the cameras would move while the turret is being aimed. Rather than providing constant feedback on a stationary drone, their output would change as the turret rotates towards it. Furthermore, the image received by the cameras would likely be blurred during rotation, decreasing the chance of accurate detection and tracking. Therefore, the fixed ring of sensors was chosen. Furthermore, while a radar sensor provides long-range detection benefits, the most critical sensors for FPV detection are the remaining three: camera, microphone and RF detector. These are therefore the primary choices for a minimum-cost system.
Main Body (D)
Feature Description
The main body simply consists of a short cylinder with a diameter of 275 mm. This body houses the main electronics of the system and is made of 3D-printed PETG, with the possibility of adding an aluminium frame for additional stability at the expense of additional weight. The main body features a removable bottom (twist-and-pull lid) to provide access to the electronics in case maintenance is necessary. Its size has been limited to allow for easier transport, and complex parts have been avoided to allow for easy replacement and quick understanding of the system, helping to avoid the need for additional training. The bottom lid also contains a bracket which allows the upper system to slide and fix onto the tripod rail below in a quick and simple manner. This provides fast attachment and detachment and allows the system to be quickly dismantled if necessary, another important factor discussed in the aforementioned target market interviews.
Tripod and Tripod Connector (E and F)
Feature Description
The base of the system, as well as a connecting element between the base and the rest of the system, had a number of requirements from target users. Firstly, the system should be easy to dismantle in case positions need to be changed quickly. Secondly, it should be easily transportable, with considerations for not just the size but also the weight of the components. Finally, the base should be able to accommodate various environments, meaning various ground conditions.
Possible Design Choices
- Base
- Fixed legs (like a chair or table)
- While fixed legs provide the opportunity to add additional stability to the system through reinforced materials in the legs of the system, these cannot be adapted for different conditions on the ground. For example, if uneven ground is present, the system would not be able to correct for this. Furthermore, solid legs for additional support would significantly increase the weight of the system. Lastly, if the legs are of a fixed shape, they cannot be folded and packed away into a smaller volume, making transportation more difficult.
- Collapsible tripod
- A collapsible tripod has a number of advantages and disadvantages. A first disadvantage compared to fixed legs is the reduction in strength, further exacerbated by the collapsible joints of the legs likely weakening the structure relative to fixed counterparts. However, the adaptability of a tripod to uneven ground and other potentially difficult ground conditions makes it very useful. It is also much easier to transport, given the smaller bounding volume the base of the system takes up during transport.
- Base-to-system connection
- Quick-release mechanism
- While contributing to a fast disassembly through the rapid process of quick-release mechanisms opening and closing, these mechanisms can often be more complex than necessary, essentially trading a fast mechanism for a slightly more elaborate build, involving multiple moving parts, springs, and so on. This increases the risk of components getting stuck during use, especially in muddy, wet or otherwise interfering environments.
- Rail-slide
- The rail connection is a simple solution which balances well the requirement for a simple mechanism that minimizes the risk of jamming or malfunction with the requirement for a fast connection and disconnection when necessary. It requires no tools, nor any additional training to use.
- Screws
- Most stable option but definitely not viable considering the need to be able to rapidly assemble and disassemble the system in active combat zones. Furthermore, this would require users to constantly carry around a screwdriver, and if it gets lost, there would be some significant issues with disassembly.
Selected Design
The rail-slide mechanism was selected for its suitable balance between a simple, robust design and fast deployment and disassembly in situations where every second counts. This connection mechanism, together with the highly adaptable tripod base, allows the system to be deployed quickly in a wide range of ground conditions without requiring training or time investments in setting up and dismantling. Both design choices also keep transportation as easy as possible.
Evaluation and Next Steps
With an estimated weight of around 27 kg (based on the proposed materials and component volumes, as well as the weights of similar existing products), the weight goal has been reached. However, this comes at the cost of sacrificing sturdier, more robust materials in some places. The system does fit in a regular backpack and is quickly disassembled (a timed disassembly of the 3D-printed model took ~11 seconds, comprising removing the rail slide, disconnecting the sensor ring, turret and main body from one another, and placing everything into a backpack), which completes another requirement. Furthermore, the speed of the system was demonstrated with a proof-of-concept prototype using lower-grade motors, achieving a maximum rotation (360 degrees horizontal, 90 degrees vertical) in around 1.5 seconds, providing clear evidence for the validity of such a system, especially when constructed in its full form.
While not all sensors could be integrated, and some other limitations were present, such as the weight estimate nearing the goal set by the aforementioned requirements, there are now clear steps for improving this system design in the future. Firstly, a fully functional, full-scale model of the system should be printed, assembled and tested using a spring-loaded net charge, to perform more in-depth mechanical tests such as the impact of continuous use on various joints and stress concentrations throughout the body, and to see how effective the tripod really is on ground that is not flat and dry but uneven, muddy or sandy. Furthermore, it should be tested how carrying one, or even two, systems affects a person's endurance, and how realistic it is for a single individual to transport the system over prolonged periods, beyond estimates based on the system's weight. Based on the findings of these studies, the mechanical design could be reiterated and further improved.
Prototype
Components Possibilities
Radar Component (Millimeter-Wave Radar):
Component: Infineon BGT60ATR12C
- Price: Around €25-30
- Description: A 60 GHz radar sensor module, compact and designed for small form factors, ideal for detecting the motion of drones.
- Software: Infineon's Radar Development Kit (RDK), a software platform to develop radar signal processing algorithms.
- Website: https://www.infineon.com/cms/en/product/sensor/radar-sensors/radar-sensors-for-automotive/60ghz-radar/bgt60atr24c/
Component: RFBeam K-LC7 Doppler Radar
- Price: Around €55
- Description: A Doppler radar module operating at 24 GHz, designed for short to medium range object detection. It’s used in UAV tracking due to its cost-efficiency and low power consumption.
- Software: Arduino IDE or MATLAB can be used for basic radar signal processing.
RF Component:
Component: LimeSDR Mini
- Price: Not deliverable at the moment
- Description: A compact, full-duplex SDR platform supporting frequencies from 10 MHz to 3.5 GHz, useful for RF-based drone detection.
- Software: LimeSuite, a suite of software for controlling LimeSDR hardware and custom signal processing.
Component: RTL-SDR V3 (Software-Defined Radio)
- Price: Around €30-40
- Description: An affordable USB-based SDR receiver capable of monitoring a wide frequency range (500 kHz to 1.75 GHz), including popular drone communication bands (2.4 GHz and 5.8 GHz). While not as advanced as higher-end SDRs, it’s widely used in hobbyist RF applications.
- Software: GNU Radio or SDR# (SDRSharp), both of which are open-source platforms for signal demodulation and analysis.
Acoustic Component:
Component: Adafruit I2S MEMS Microphone (SPH0645LM4H-B)
- Price: Around €7-10
- Description: A low-cost MEMS microphone offering high sensitivity, commonly used in sound detection projects for its clarity and noise rejection.
- Software: Arduino IDE or Python with SciPy for sound signature recognition.
- Website: https://www.adafruit.com/product/3421
Component: DFRobot Fermion MEMS Microphone Module - S15OT421 (Breakout)
- Price: Around €4
- Description: A low-cost MEMS microphone offering high sensitivity, commonly used in sound detection projects for its clarity and noise rejection.
- Software: Arduino IDE.
- Website: https://www.dfrobot.com/product-2357.html
Vision-Based Component:
Component: Arducam 12MP Camera (Visible Light)
- Price: Around €60
- Description: A lightweight, high-resolution camera module, ideal for machine learning and visual detection.
- Software: OpenCV combined with machine learning libraries like TensorFlow or PyTorch for object detection and tracking.
- Website: https://www.arducam.com/product/arducam-12mp-imx477-mini-high-quality-camera-module-for-raspberry-pi/
Component: Raspberry Pi Camera Module v2
- Price: Around €15-20
- Description: A small, lightweight 8MP camera compatible with Raspberry Pi, offering high resolution and ease of use for vision-based drone detection. It can be paired with machine learning algorithms for object detection.
- Software: OpenCV with Python for real-time image processing and detection, or TensorFlow for more advanced machine learning applications.
- Website: https://www.raspberrypi.com/products/camera-module-v2/
Component: ESP32-CAM Module
- Price: Around €10-15
- Description: A highly affordable camera module with a built-in ESP32 chip for Wi-Fi and Bluetooth, ideal for wireless image transmission. It’s a great choice for low-cost vision-based systems.
- Software: Arduino IDE or Python with OpenCV for basic image recognition tasks.
- Website: https://www.tinytronics.nl/en/development-boards/microcontroller-boards/with-wi-fi/esp32-cam-wifi-and-bluetooth-board-with-ov2640-camera
Sensor Fusion and Central Control
Component: Raspberry Pi 4
- Price: Around €35
- Description: A small, affordable single-board computer that can handle sensor fusion, control systems, and real-time data processing.
- Software: ROS (Robot Operating System) or MQTT for managing communications between the sensors and the central processing unit.
- Website: https://www.raspberrypi.com/products/raspberry-pi-4-model-b/
Component: ESP32 Development Board
- Price: Around €10-15
- Description: A highly versatile, Wi-Fi and Bluetooth-enabled microcontroller. It’s lightweight and low-power, making it ideal for sensor fusion in portable devices.
- Software: Arduino IDE or MicroPython, with MQTT for sensor data transmission and real-time control (see the sketch after this component list).
- Website: https://www.espressif.com/en/products/devkits
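To sketch how either board could stream detections to a central process, the snippet below publishes a reading over MQTT with the paho-mqtt Python client (one of the software options listed above); the broker address, topic name and confidence value are placeholders.
<syntaxhighlight lang="python">
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()          # paho-mqtt 1.x style; version 2.x also
                                # takes a CallbackAPIVersion argument
client.connect("192.168.1.10", 1883)   # placeholder broker address

while True:
    # Placeholder reading; in the real system this would come from the
    # acoustic or vision pipeline described in the testing section.
    reading = {"sensor": "microphone",
               "drone_confidence": 0.82,
               "timestamp": time.time()}
    client.publish("antidrone/detections", json.dumps(reading))
    time.sleep(1.0)
</syntaxhighlight>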
Testing of Prototype components
To verify whether we have achieved some of these requirements, we have to devise test scenarios that allow us to quantitatively determine our prototype's accuracy. The notion of accuracy may of course be ambiguous, as we cannot test our prototype in a warzone-like environment, and accuracy in a lab may not translate to accuracy in the trenches. However, we will attempt to simulate such an environment through the use of a projector, as well as through the use of speakers.
Component testing process
In this section we show and explain how the microphone and the camera are able to detect drones, and whether these components meet our requirements.
Acoustic method
To detect drones in a recorded sound, we first of all need a reference sound to compare the recording with; our reference sound is, naturally, the sound of the drone. This sound file can be plotted as an audio plot with time on the x-axis and the signal's strength in arbitrary units on the y-axis (Fig 15.1). After applying the Fourier transform and normalising, a plot in the frequency domain is obtained, with frequency on the x-axis and normalised strength on the y-axis (Fig 15.2). The same can be done for the recorded sound we want to test. In the analysis, the frequency domain is reduced to 500 Hz-3000 Hz, because our testing drones and the background noises used fall in this range. The last step is to search the recorded normalised frequency-domain plot (Fig 15.3) for the 'global shape' of the normalised frequency-domain plot of the reference sound (Fig 15.2). This can be done with multiple strategies; we will first explain how our method works and then why we chose it.
Our goal is to obtain a set of frequencies which represents the 'fingerprint' of the drone's normalised frequency-domain plot. We obtain this by setting a threshold value (λ) equal to 1, lowering λ in small steps, and marking the frequency peaks we encounter with red dots, giving a set of 'n' peak frequencies representing this specific sound. By trial and error, 'n' is set to the optimal value of 100 for the reference sound and 250 for the recorded sound. This is plotted in (Fig 15.4) for the drone sound and (Fig 15.5) for the recorded sound. With two sets of frequencies, each representing its own sound, the only thing left is to compare the sets. This is done by (Eq 15.1): for each frequency of the drone reference sound, we check whether there is a frequency in the recorded set such that the absolute value of their difference is smaller than the tolerance. The tolerance is set by trial and error to 2 Hz; this is needed because the Doppler effect may play a role if the drone moves, and because the microphone may pick up slightly shifted frequencies. The number of frequencies that meet the requirement of (Eq 15.1), m, thus lies between 0 and 100, where m=100 is perfect recognition of the drone. We chose this method because it is an intuitive concept and relatively easy to implement; in addition, the program is computationally very light and therefore fast to run. The number m then needs to be transformed into a percentage representing the chance of a drone being present. The simplest conversion would be a linear relationship, but after a few experiments at varying distances, with and without background noise, the relationship did not appear to be linear. Up to roughly m=20 we know with great certainty that there is no drone, because in regular background noise a few peaks will often correspond; likewise, at m=90 we know with relatively good certainty that a drone is present, but the experiments showed that, especially with background noise, often not all frequencies of the drone set are reflected in the recorded set. The uncertain domain therefore lies roughly between m=20 and m=90. The 'Logistic Formula' (Eq 15.2) fits this quite well; in the program we have taken a=0.16 and b=55. The value of 'b' reflects the number of corresponding peaks at which the certainty of a drone being present is 50%; 55 is chosen because it is the middle of the uncertain domain. The values of 'a' and 'b' are unfortunately not fully validated, as we did not have the proper equipment or the time to conduct a thorough experiment.
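A minimal Python sketch of this peak-matching pipeline is given below, assuming WAV input. The parameter values (n=100 and n=250, a 2 Hz tolerance, a=0.16, b=55) are those quoted above; the file names are placeholders, and the peak picking simply takes the n strongest bins, which is equivalent to lowering the threshold λ until n peaks are marked.
<syntaxhighlight lang="python">
import numpy as np
from scipy.io import wavfile

def peak_frequencies(path, n_peaks, fmin=500.0, fmax=3000.0):
    """Return the n_peaks strongest frequencies in [fmin, fmax] Hz."""
    rate, signal = wavfile.read(path)
    if signal.ndim > 1:                      # stereo -> mono
        signal = signal.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    band = (freqs >= fmin) & (freqs <= fmax)
    freqs, spectrum = freqs[band], spectrum[band]
    spectrum /= spectrum.max()               # normalise to [0, 1]
    return freqs[np.argsort(spectrum)[-n_peaks:]]

def drone_probability(reference_wav, recorded_wav,
                      tolerance=2.0, a=0.16, b=55.0):
    """Count reference peaks matched within the tolerance (Eq 15.1),
    then map the match count m to a probability (Eq 15.2)."""
    ref = peak_frequencies(reference_wav, n_peaks=100)
    rec = peak_frequencies(recorded_wav, n_peaks=250)
    m = sum(np.any(np.abs(rec - f) < tolerance) for f in ref)
    return 1.0 / (1.0 + np.exp(-a * (m - b)))  # logistic formula

print(drone_probability("drone_reference.wav", "recording.wav"))
</syntaxhighlight>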
Visual method
Detecting drones accurately is crucial for various applications, including security and surveillance. While acoustic detection methods provide reliable results by identifying drone sounds in noisy environments, they can be limited in range or accuracy under certain conditions. To complement acoustic detection, we implemented a visual detection system using the YOLOv8 object detection model, a state-of-the-art computer vision model, and trained it on an online dataset of drone images to detect drones in real time.
The first step was data preparation, which required gathering a comprehensive dataset of drone images captured under varied conditions, including different backgrounds, distances, and angles. Each image in the dataset was labeled with bounding boxes precisely marking the drones' positions; fortunately, the drone dataset we found already included an XML annotation file for each photo. A training configuration file was then created in the YOLO format, specifying the paths to the dataset images and labels, as well as information about the object class, drones in this case.

With the dataset ready, we moved to model initialization. To expedite the training process and improve accuracy, we started with the YOLOv8x model initialized with pre-trained weights from the COCO (Common Objects in Context) dataset, a large object detection dataset. These weights provided a strong foundation, allowing the model to begin training with general object recognition capabilities, which were subsequently refined to specialize in detecting drones. The pre-trained model's understanding of common object features (such as edges, shapes, and textures) facilitated faster learning and improved accuracy when adapted to the task of drone detection.

The training process involved configuring the model with optimized parameters. A moderate image size of 640x640 pixels was selected to balance detection accuracy with training speed. Training was conducted for approximately 32 epochs, which gave the model sufficient time to learn the nuances of drone features and improve detection accuracy. Throughout training, the model minimized a loss function comprising three main components: bounding box accuracy, object presence probability, and class confidence. By refining these aspects, the model learned to distinguish drones from other objects and background noise (e.g., leaves). Upon completion, the model's best-performing checkpoint was saved locally for later use.

Before moving on to real-time detection, we tested our model on videos found on the internet, looking for flaws and checking whether more training was needed. We noticed that although the model detected the drone when it was close to the camera or alone in the sky, it struggled with busy backgrounds, especially trees and their leaves. We therefore retrained the model for 100 more epochs and increased the training image size to 1024x1024 pixels. This required a lot of computational power, so we paid for a Google Colab Pro subscription to train the model faster.

After training, the YOLOv8 model was deployed for real-time detection. The model was applied to video feeds, analyzing each frame individually, drawing bounding boxes around detected drones and distinguishing them from other elements in the background. However, because the laptop we tested on lacked a GPU, the frame rate dropped significantly when running the trained YOLOv8x model, so we compensated by running the model only every 3 frames. With this workaround, the model remained well suited for real-time applications, such as live drone-feed monitoring or video surveillance.
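A condensed sketch of this pipeline using the ultralytics package is shown below. The dataset YAML path and the video source are placeholders, the checkpoint path reflects ultralytics' default output layout, and the training parameters match those described above.
<syntaxhighlight lang="python">
import cv2
from ultralytics import YOLO

# Fine-tune the COCO-pretrained YOLOv8x weights on the drone dataset
# ("drone_dataset.yaml" is a placeholder path to the YOLO-format config).
model = YOLO("yolov8x.pt")
model.train(data="drone_dataset.yaml", epochs=32, imgsz=640)

# Reload the best checkpoint (default ultralytics output location).
model = YOLO("runs/detect/train/weights/best.pt")

cap = cv2.VideoCapture(0)                # placeholder video source
frame_idx, annotated = 0, None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 3 == 0:               # CPU-only workaround: run the
        results = model(frame, verbose=False)   # model every 3rd frame
        annotated = results[0].plot()    # draw bounding boxes
    cv2.imshow("drone detection", annotated if annotated is not None else frame)
    if cv2.waitKey(1) == ord("q"):
        break
    frame_idx += 1
cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>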
The resulting real-time drone detection model achieved high accuracy, especially when the entire outline of the drone was visible, even if it was surrounded by other objects, like trees, houses or clouds. Moreover, we were able to consistently detect a drone when it was 30 meters away in the sky. However, this distance testing failed when the drone was camouflaged by buildings or by trees, but we made up for it using the acoustic methods as described above.
Sensor Fusion methodology
From literary research we found five sensor fusion algorithms that could be relevant to our detection system:
- Extended Kalman Filter
- Convolutional Neural Networks
- Bayesian Networks
- Decision-Level Fusion with Voting Mechanisms
- Deep Reinforcement Learning
The first three listed here we cannot apply properly with only a vision-based and an acoustic sensing component. The fifth algorithm is not feasible within our resources and the time limit of the project. Therefore we decided to take a deeper look at Decision-Level Fusion with Voting Mechanisms, and how we could apply it to our tests.
We looked into three different ways of applying this method. In the first, we state there is a drone near if at least one of the two sensors claims to have detected a drone. In the second, we state there is a drone near only if both sensors claim to have detected a drone. In the third, if one of the sensors claims there is a drone, we lower the threshold for the other sensor and still require double confirmation from both sensors.
With the first method we found there to be a lot of false positives, meaning the system would indicate there is a drone near while this is not the case. This was because our vision-based component was not very accurate.
The second method worked better than the first, but only in specific scenarios. If the drone was far away, the camera could pick it up, but the microphone simply did not have the range.
The third method gives a balance between these. It does not produce the false positives of the first method, as it still requires confirmation from both sensors, yet it misses fewer detections than the second method, because the thresholds adapt when one sensor fires.
However, this can still miss detections at times: consider again the situation with the drone far away, where the camera detects it but the microphone cannot. To adapt the method better to our system, we should therefore assess whether each sensor is even applicable in the situation. For example, if the vision-based component detects a drone and estimates it to be a certain number of meters away, the acoustic component should be considered not applicable and be taken out of the decision making. This is because the tested specifications tell us the microphone can only detect a drone up to a certain range; if the camera picks up a drone beyond that range, we know the microphone will never detect it, so its data carries no information and should not be considered. The exact cut-off distance depends on the components used and would require more testing to determine accurately. This works best when more than two components are used, because otherwise the third method degenerates in some cases into the first method, which produces the false positives we want to avoid.
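A sketch of this adapted third method, including the range gating just described, might look as follows; all thresholds, the relaxation factor, and the microphone range are illustrative placeholders that would have to be measured for the actual components.
<syntaxhighlight lang="python">
def fuse(cam_conf, mic_conf, distance_m,
         cam_thr=0.6, mic_thr=0.5, relax=0.7, mic_range_m=40.0):
    """Decision-level fusion with voting (the third method above),
    plus range gating: the microphone is ignored beyond its usable
    range. All numeric parameters are placeholders."""
    if distance_m > mic_range_m:
        # The microphone cannot hear this far, so its data carries no
        # information and the camera decides alone.
        return cam_conf >= cam_thr
    cam_hit = cam_conf >= cam_thr
    mic_hit = mic_conf >= mic_thr
    # If exactly one sensor fires, relax the other's threshold...
    if cam_hit and not mic_hit:
        mic_hit = mic_conf >= mic_thr * relax
    elif mic_hit and not cam_hit:
        cam_hit = cam_conf >= cam_thr * relax
    # ...but still require double confirmation.
    return cam_hit and mic_hit

print(fuse(cam_conf=0.75, mic_conf=0.40, distance_m=12.0))  # True
</syntaxhighlight>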
Future Work
Expanding this project could focus on several key areas that enhance functionality and reliability. Anyone looking to extend this project could look into the following areas.
Detection of Projectiles and Drone/Projectile trajectory (speed & direction)
We have done research into tracking the path of a drone and found an existing project that tracks the motion of a drone using a YOLOv8 model, so extending our project in that direction should not take too much effort. This should be followed by an experimental analysis of the real-life location and distance of the drone compared to its size and location in the video feed, so that the detection and interception methods can be merged.
Improvement of Detection Accuracy through Improved Sensor Fusion
We have done research into the different components that could be added to the system for drone detection. We tested two of these but not all, so future work could test the other components as well. Additionally, more testing could be done on sensor fusion once all components are in play.
Testing Interception Method
We have done research into different interception methods, and also into how a net could be used to intercept projectiles. Future work could test intercepting objects with a net, verify whether our calculations are correct, and determine how to build a device that holds a net, how to shoot it using the system, and how to reload the net into the system.
Field Testing in Diverse Environments
It is important to analyse whether our system would perform as expected in different environments. Future work could therefore test how adaptable the system is and how to improve its ability to adapt to different environments.
Literary Research
Autonomous Weapons Systems and International Humanitarian Law: Need for Expansion or Not[42]
A significant challenge with autonomous systems is ensuring compliance with international laws, particularly IHL. The paper delves into how such systems can be designed to adhere to humanitarian law and discusses critical and optional features such as the capacity to identify combatants and non-combatants effectively. This is directly relevant to ensuring our system's utility in operational contexts while adhering to ethical norms.
Artificial Intelligence Applied to Drone Control: A State of the Art[11]
This paper explores the integration of AI in drone systems, focusing on enhancing autonomous behaviors such as navigation, decision-making, and failure prediction. AI techniques like deep learning and reinforcement learning are used to optimize trajectory, improve real-time decision-making, and boost the efficiency of autonomous drones in dynamic environments.
Drone Detection and Defense Systems: Survey and Solutions[8]
This paper provides a comprehensive survey of existing drone detection and defense systems, exploring various sensor modalities like radio frequency (RF), radar, and optical methods. The authors propose a solution called DronEnd, which integrates detection, localization, and annihilation functions using Software-Defined Radio (SDR) platforms. The system highlights real-time identification and jamming capabilities, critical for intercepting drones with minimal collateral effects.
Advances and Challenges in Drone Detection and Classification[10]
This state-of-the-art review highlights the latest advancements in drone detection techniques, covering RF analysis, radar, acoustic, and vision-based systems. It emphasizes the importance of sensor fusion to improve detection accuracy and effectiveness.
Autonomous Defense System with Drone Jamming capabilities[43]
This patent describes a drone defense system comprising at least one jammer and at least one radio detector. The system is designed to send out interference signals that block a drone's communication or GPS signals, causing it to land or return. It also uses a technique where the jammer temporarily interrupts the interference signal to allow the radio detector to receive data and locate the drone's position or intercept its control signals.
Small Unmanned Aerial Systems (sUAS) and the Force Protection Threat to DoD[44]
This article discusses the increasing threat posed by small unmanned aerial systems (sUAS) to military forces, particularly the U.S. Department of Defense (DoD). It highlights how enemies are using these drones for surveillance and delivery of explosives.
The Rise of Radar-Based UAV Detection For Military: A Game-Changer in Modern Warfare[9]
This article discusses how radar-based unmanned aerial vehicle (UAV) detection is transforming military operations. SpotterRF’s systems use advanced radar technology to detect drones in all conditions, including darkness or bad weather. By integrating AI, these systems can distinguish between drones and non-threats like birds, improving accuracy and reducing false positives.
Swarm-based counter UAV defense system[45]
This article discusses autonomous systems designed to detect and intercept drones. It emphasizes the use of AI and machine learning to improve the real-time detection, classification, and interception of drones, focusing on autonomous UAVs (dUAVs) that can neutralize threats. The research delves into algorithms and swarm-based defense strategies that optimize how drones are intercepted.
Small Drone Threat Grows More Complex, Deadly as Tech Advances[46]
The article highlights the growing threat of small UAVs to military operations. It discusses how these systems are used by enemies for surveillance and direct attacks, and the various countermeasures the U.S. Department of Defense is developing to stop these attacks. It explores the use of jamming (interfering with the connection between drone and controller), radio frequency sensing, and mobile detection systems.
US Army invents 40mm grenade that nets bad drones[47]
This article discusses recently developed technology that involves a 40mm grenade that deploys a net to capture and neutralise hostile drones. This system can be fired from a standard grenade launcher, providing a portable, low-cost method of taking down small unmanned aerial systems (sUAS) without causing significant collateral damage.
Making drones to kill civilians: is it ethical?[48]
Usually, anything where harm is done to innocent people is seen as unethical. This would mean that every company providing items for war does something at least partially unethical. However, international law states that during war a country is not bound by all traditional ethics, which makes deciding what is ethical and what is not harder.
Sociocultural objections to using killer drones against civilians:
- Civilians (not in the war) are killed by drones, since drones cannot tell the difference between people in the war and people not in the war
- We should not see war as a ‘clash of civilizations’ as this would induce that civilians are also part of war
Is it ethical to use drones to kill civilians?:
- As said above, an international law applies during war between countries. This law implies:
o Killing civilians = murder = prohibited
- People getting attacked by drones say that it is not drones who kill people, but people who kill people
A simple solution is to follow the 3 laws of robotics from Isaac Asimov:
- A robot may not injure a human being or allow a human being to come to harm
- A robot must obey orders given to it by human beings except when such orders conflict with the first law
- A robot must protect its own existence, provided such protection does not conflict with the first or second law
But following these laws would be too simplistic, as they are not actual laws
The current drone killer’s rationale:
- A person is targeted only if he/she is harmful to the interests of the country so long as he/she remains alive
This rationale is objected by:
- This rationale is simply assumed since the international law says nothing about random targeting of individuals
This objection is disproved by:
- Even if the target is not in a warzone, he/she is only targeted when harmful to the interests of the country; thus such a person would not be a random person
Is it legal and ethical to produce drones that are used to kill civilians?:
A manufacturer of killer drones may not assume its drones are only being used peacefully.
The manufacturers of killer drones often issue cautionary recommendations, which serve to put these manufacturers in a legally safe position.
Conclusion:
The problem is that drone killing is not covered by the traditional laws of war. The literature is not uniformly opposed to drone killing, but the majority states that killing civilians is unethical.
Ethics, autonomy, and killer drones: Can machines do right?[49]
The article looks into the ethics of certain weapons used in war (in the US). Since we can look back on weapons that were new at the time (like the atomic bomb), we can compare what people thought of them then with what we now consider ethical. The article uses two different viewpoints to judge the ethics of a war technology, namely teleology and deontology. Teleology focuses on the outcome of an action, while deontology focuses more on the duty behind an action.
The article looks first at the atomic bomb, which according to a teleologic viewpoint could be seen as ethical, as it would bring an end to war quickly which saves lives in the long term. Deontology also says it could be ethical since it would show superiority to have such strong weapons, which intimidates other countries in war.
Next up for discussion is a torture program. According to teleology this can be seen as ethical, since torturing some people to extract critical information from them could prevent more deaths in the future.
The article then questions AI-enabled drones. For AI ethics, the AI should always be governed by humans, bias should be reduced (many civilians are being killed now), and there should be more transparency. For a country this is more challenging, since it also has to focus on safety and winning the war. This is why, in contrast with the atomic bomb, where teleology and deontology agreed, there is now a contrast between the two: teleology focuses on outcomes, thus security and protection, while deontology focuses on global values, like human rights. The article says the challenge is to use AI technologies effectively while following ethical principles, and to make everyone do so.
Survey on anti-drone systems: components, designs, and challenges[50]
Requirements an anti-drone system must have:
- Drone specialized detection (detect the drone)
- Multi-drone defensibility (defend against multiple drones)
- Cooperation with security organizations (restrictions to functionality should be discussed with public security organizations (police/military))
- System portability (lightweight and uses wireless networks)
- Non-military neutralization (don't use military weapons to defend against drones)
Ways to detect drones:
- Thermal detection (Motors, batteries and internal hardware produce heat)
o Works in all weather
o Affordable
o Not too much range
- RF scanner (capture wireless signals)
o Can’t detect drones that don’t produce RF signals
o Long range
- Radar detection (Detect objects and determine the shape)
o Long range
o Can’t see the drone if it is not moving since it thinks it is an obstacle
- Optical camera detection (detect from a video)
o Short range
o Weather dependent
Hybrid detection systems to detect drones
- Radar + vision
- Multiple RF scanners
- Vision + acoustic
Drone neutralization:
- Hijacking/spoofing (Create fake signal to prevent drone from moving)
- Geofencing (Prevent drone from approaching a certain point)
- Jamming (Stopping radio communication between drone and controller)
- Killer drones (Using drones to damage attacking drones)
- Capture (Physically capture a drone) (for example with a net)
o Terrestrial capture systems (human-held or vehicle-mounted)
o Aerial capture systems (System on defender drones)
Determination of threat level:
- Object
- Flight path
- Available time
(NOTE: The article goes into more depth about some mathematics to determine the threat level, which could be used in our system)
Artificial intelligence, robotics, ethics, and the military: a Canadian perspective[51]
The article looks not only at the ethics but also at the social and legal aspects of using artificial intelligence in the military. It does so along three main aspects of AI: accountability and responsibility, reliability, and trust.
Accountability and Responsibility:
The article states that accountability and responsibility for the actions of an AI system lie with the operator, who is a human. However, when the AI malfunctions it becomes challenging to determine who is accountable.
Reliability:
Current AI is not reliable enough and only performs well in the very specific situations it was made for. During military use you never know what situation an AI will end up in, which causes a lack of reliability. Verification of AI technologies is necessary, especially when dealing with the life and death of humans.
Trust:
People who use AI in the military should be taught how the AI works and to what extent they can trust it. Too much or too little trust in AI can lead to big mistakes. The makers of these AI systems should be more transparent, so that it can be understood what the AI does.
We need a proactive approach to minimize the risks of AI. This means that everyone who uses or is related to AI in the military should carefully consider the risks that AI brings.
When AI goes to war: Youth opinion, fictional reality and autonomous weapons[52]
The article looks into the responsibilities and risks of fully autonomous robots in war. It does this by surveying youth participants and combining their answers with other research and theory.
The article found that the participants felt that humans should be responsible for the actions of autonomous robots. This is supported by theory, which says that since robots do not have emotions like humans do, they cannot be responsible for their actions in the same way humans are. If autonomous robots were programmed with some ethics in mind, the robot could in some way be held accountable for its actions as well. How this responsibility should be divided between humans and robots remained unclear in the article: some said responsibility lay purely with the engineers, designers and government, while others said that human and robot share responsibility.
The article also found that fears of fully autonomous robots persist. These stem from old myths and from social media, which suggest that autonomous robots can turn against humans to destroy them.
As for the legal side, autonomous robots can potentially violate laws during war, especially if they are not held responsible for their actions. This worries the youth.
For the youth, the threats that fully autonomous robots bring outweigh the benefits. This is a signal to the scientific community to further develop and implement norms and regulations for autonomous robots.
Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review[53]
Summary:
*The paper provides a comprehensive review of drone detection and classification technologies. It delves into the various methods employed to detect and identify drones, including radar, radio frequency (RF), acoustic, and vision-based systems. Each has its strengths and weaknesses, after which the authors discuss 'sensor fusion', where combining detection methods improves system performance and robustness.
Key takeaways:
*Sensor fusion should be incorporated into the system to improve performance and robustness (an illustrative fusion sketch follows below)
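To illustrate what decision-level sensor fusion can look like in code, here is a minimal sketch of our own that merges per-sensor detection probabilities with a naive-Bayes odds update; the sensor names, the prior, and the example readings are hypothetical.
<syntaxhighlight lang="python">
# Decision-level sensor-fusion sketch: combine per-sensor detection
# probabilities under a conditional-independence assumption. The
# sensor names, the 0.1 prior, and the readings are illustrative.
def fuse_detections(probs: dict[str, float], prior: float = 0.1) -> float:
    """Fuse per-sensor P(drone | reading) values into one posterior."""
    prior_odds = prior / (1.0 - prior)
    odds = prior_odds
    for p in probs.values():
        p = min(max(p, 1e-6), 1.0 - 1e-6)      # keep odds finite
        odds *= (p / (1.0 - p)) / prior_odds   # sensor's likelihood ratio
    return odds / (1.0 + odds)

readings = {"acoustic": 0.7, "vision": 0.9, "rf": 0.4}
print(f"fused P(drone) = {fuse_detections(readings):.3f}")
</syntaxhighlight>
With the readings above the fused posterior is high (about 0.999), showing how several individually inconclusive sensors can still yield a confident combined decision.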
Counter Drone Technology: A Review[54]
Summary:
*The article provides a comprehensive analysis of current counter-drone technologies and categorizes counter-drone systems into three main groups: detection, identification, and neutralization technologies. Detection technologies include radar, RF detection, etc. Once a drone is detected, it must be identified as friend or foe; the review discusses methods such as machine learning algorithms and signature signal libraries. It covers various neutralization methods, including jamming (RF and GPS), laser-based systems, and kinetic solutions like nets or projectiles, and the challenges each method faces.
Key takeaways:
*Integration of multiple sensor technologies is critical
*Non-kinetic neutralization methods should be prioritized where possible to avoid unintended consequences
A Soft-Kill Reinforcement Learning Counter Unmanned Aerial System (C-UAS) with Accelerated Training[55]
Summary:
*This article discusses the development of a counter-drone system that applies reinforcement learning to non-lethal ("soft-kill") techniques. The system is designed to learn and adapt to various environments and drone threats using simulated environments (a toy training sketch follows below).
Key takeaways:
*C-UAS systems must be rapidly deployable
*C-UAS systems should be trained in simulated environments to improve robustness and adaptability
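The paper's training setup is far richer than anything reproduced here, but the toy tabular Q-learning loop below conveys the core idea of learning a soft-kill policy inside a simulated environment. The environment dynamics, rewards, and jamming-success model are entirely our own stand-ins, not the paper's.
<syntaxhighlight lang="python">
# Toy Q-learning sketch of a "soft-kill" decision: when should the
# system trigger its jammer as a drone closes in? Everything about
# this environment is a hypothetical stand-in for the paper's setup.
import random

N_STATES = 10        # discretized distance bins: 9 = far away, 0 = impact
ACTIONS = (0, 1)     # 0 = wait, 1 = trigger the jammer

def step(state: int, action: int):
    """Toy dynamics: jamming succeeds more often at close range."""
    if action == 1 and random.random() < 1.0 - state / N_STATES:
        return state, 10.0, True        # drone neutralized
    if state - 1 <= 0:
        return 0, -10.0, True           # drone reached the target
    return state - 1, -0.1, False      # drone closes in one bin

q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.95, 0.1
for _ in range(20000):                  # training episodes in simulation
    s, done = N_STATES - 1, False
    while not done:
        if random.random() < eps:       # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = 0 if q[s][0] >= q[s][1] else 1
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(q[s2])
        q[s][a] += alpha * (target - q[s][a])
        s = s2

# Learned policy per distance bin: expect "wait" far out, "jam" up close.
print(["jam" if q[s][1] > q[s][0] else "wait" for s in range(N_STATES)])
</syntaxhighlight>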
Terrorist Use of Unmanned Aerial Vehicles: Turkey's Example[56]
Summary:
*The article examines how terrorist organizations have utilized drones for surveillance, intelligence gathering, and attacks. It highlights the growing accessibility of consumer drones, which are repurposed for malicious use, and the various counter-UAV technologies and tactics employed by Turkish forces to mitigate this threat.
Key takeaways:
*Running costs must be kept minimal, since access to affordable drones is widespread
*Both kinetic and non-kinetic interception must be available if the system is to be used in urban or otherwise populated environments
Impact-point prediction of trajectory-correction grenade based on perturbation theory[57]
Summary:
*The article discusses trajectory prediction and correction methods for improving the accuracy of artillery projectiles. By modeling and simulating the effects of small perturbations in projectile flight, the study proposes an impact-point prediction algorithm. While this algorithm is aimed at artillery accuracy, it could potentially be used to predict the trajectory and impact location of drone-dropped explosives (a simple numerical sketch follows below).
Key takeaways:
*Detailed description of real-time trajectory prediction corrections
*Challenge of balancing efficiency with accuracy in path-prediction algorithms
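As a simpler point of comparison with the paper's perturbation-theory algorithm, the sketch below just forward-integrates a point-mass projectile with quadratic drag until it reaches the ground and reports the impact point. The mass, drag coefficient, and cross-sectional area are assumed values for a grenade-sized object.
<syntaxhighlight lang="python">
# Illustrative impact-point predictor: Euler-integrate a point mass
# with quadratic drag. This is a plain numerical sketch, NOT the
# paper's perturbation-theory method; all physical data is assumed.
import numpy as np

def predict_impact(pos, vel, mass=0.4, drag_coeff=0.47,
                   area=3.2e-3, rho=1.225, dt=1e-3, g=9.81):
    """Integrate (x, y, z) state until z <= 0; return impact (x, y)."""
    pos, vel = np.asarray(pos, float), np.asarray(vel, float)
    k = 0.5 * rho * drag_coeff * area / mass   # drag accel. per |v|*v
    while pos[2] > 0.0:
        speed = np.linalg.norm(vel)
        acc = np.array([0.0, 0.0, -g]) - k * speed * vel
        vel += acc * dt
        pos += vel * dt
    return pos[:2]

# Grenade-sized object released 40 m up from a drone drifting at 5 m/s.
print(predict_impact(pos=[0.0, 0.0, 40.0], vel=[5.0, 0.0, 0.0]))
</syntaxhighlight>
A real-time system would rerun such a prediction every time the tracker updates the state estimate, trading integration step size for accuracy.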
Armed Drones and Ethical Policing: Risk, Perception, and the Tele-Present Officer[58]
This paper discusses the tele-present officer operating 'unmanned drones'. It looks at this from the point of view of attacking, but the same question applies to 'attacking' incoming drones: a person should or should not still 'pull the trigger' to intercept a drone, given the potential risk of redirecting it onto another crowd.
The Ethics and Legal Implications of Military Unmanned Vehicles[59]
This paper states that human soldiers/marines also do not agree on what constitutes ethical warfare. It gives a few example questions whose answers are controversial among the soldiers/marines. (We may use this to argue why our device should or should not be autonomous.)
Countering the drone threat: implications of C-UAS technology for Norway in an EU and NATO context[60]
This paper gives clear insight into different scenarios where drones can be a threat, for example against large crowds but also in warfare. The paper does not, however, give a concrete solution.
An anti-drone device based on capture technology[61]
This paper explores the capabilities of capturing a drone with a net. It also addresses some other forms of anti-drone devices, such as lasers, hijacking and RF jamming. For the rest, the paper is very technical on the net capturing itself.
Four innovative drone interceptors.[62]
This paper describes 5 different ways of detecting drones: acoustic detection and tracking with microphones positioned in a particular grid, video detection by cameras, thermal detection, radar detection and tracking, and finally detection through the drone's radio emissions. Because we also want to be able to catch 'off-the-shelf' drones, we have to investigate which methods are appropriate. For taking down the drone the paper gives 6 options: missile launch, radio jamming, net throwers, machine guns, lasers, and drone interceptors. The 4 drone interceptors the paper introduces are a bit above our budget, as they are real drones with various kinds of generators to take down a drone (for example with a high electric pulse), but we could still look into this.
Comparative Analysis of ROS-Unity3D and ROS-Gazebo for Mobile Ground Robot Simulation[63]
This paper examines the use of Unity3D with ROS versus the more traditional ROS-Gazebo for simulating autonomous robots. It compares their architectures, performance in environment creation, resource usage, and accuracy, finding that ROS-Unity3D is better for larger environments and visual simulation, while ROS-Gazebo offers more sensor plugins and is more resource-efficient for small environments.
=='''Logbook'''==
Week 1
{| class="wikitable"
|+
!Name
!Total
!Break-down
|-
|Max van Aken
|10h
|Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [21], [22], [23], [24], [25] (5h), Summarized and described key takeaways for papers/patents [21], [22], [23], [24], [25] (1h)
|-
|Robert Arnhold
|16h
|Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [11], [12], [13], [14], [15] (10h), Summarized and described key takeaways for papers/patents [11], [12], [13], [14], [15] (2h)
|-
|Tim Damen
|16h
|Attended lecture (2h), Attended meeting with group (2h), Analysed papers [12], [13], [14], [15], [16] (10h), Summarized and described key takeaways for papers [12], [13], [14], [15], [16] (2h)
|-
|Ruben Otter
|17h
|Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [1], [2], [3], [4], [5] (10h), Summarized and described key takeaways for papers/patents [1], [2], [3], [4], [5] (2h), Set up Wiki page (1h)
|-
|Raul Sanchez Flores
|16h
|Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [6], [7], [8], [9], [10] (10h), Summarized and described key takeaways for papers/patents [6], [7], [8], [9], [10] (2h)
|}
Week 2
{| class="wikitable"
|+
!Name
!Total
!Break-down
|-
|Max van Aken
|13h
|Attended lecture (30min), Attended meeting with group (1h), Researched kinds of situations for the device (2h), Wrote about situations (1.5h), Researched ethics (6h), Wrote ethics (2h)
|-
|Robert Arnhold
|6.5h
|Attended lecture (30min), Attended meeting with group (1h), Worked on interview questions (1h), Organized introductory interview (2h), Prepared interviews for next weeks (2h)
|-
|Tim Damen
|13.5h
|Attended lecture (30min), Attended meeting with group (1h), Risk evaluation (2h), Important features (1h), Research on ethics of deflection (8h), Writing part about deflection (1h)
|-
|Ruben Otter
|14.5h
|Attended lecture (30min), Attended meeting with group (1h), Analysed papers [2], [3], [4], [5] for research in drone detection (6h), Wrote about drone detection and its relation to our system using papers [2], [3], [4], [5] (4h), Analysed and summarized paper [6] (2h), Wrote about usage of simulation and its software (1h)
|-
|Raul Sanchez Flores
|14.5h
|Attended lecture (30min), Attended meeting with group (1h), Researched and analysed papers about approaches to drone interception (4h), Researched and analysed papers about drone interception (6h), Evaluated different existing drone interception methods (3h)
|}
Week 3
{| class="wikitable"
|+
!Name
!Total
!Break-down
|-
|Max van Aken
|6h
|Attended lecture (30min), Meeting with group (1.5h), Researched ethics (2h), Rewrote ethics (2h)
|-
|Robert Arnhold
|13h
|Attended lecture (30min), Meeting with group (1.5h), Completed interview (4h), Planned next interview (1h), Conceptualized mechanical side (3h), State-of-the-art research and review of previously prepared sources (3h)
|-
|Tim Damen
|10h
|Attended lecture (30min), Meeting with group (1.5h), Analysed papers [6], [7], [8], [9], [10], [11] (6h), Rewrote ethics of deflection based on autonomous cars (2h)
|-
|Ruben Otter
|10h
|Attended lecture (30min), Meeting with group (1.5h), Research into possible drone detection components (6h), Created list of possible prototype components (2h)
|-
|Raul Sanchez Flores
|9.5h
|Attended lecture (30min), Meeting with group (1.5h), Made interception methods comparison more in-depth (5h), Created Pugh Matrix to compare different interception methods (2h), Emailed Ruud about components we need (30min)
|}
Week 4
{| class="wikitable"
|+
!Name
!Total
!Break-down
|-
|Max van Aken
|11h
|Attended lecture (30min), Meeting with group (1.5h), Researched default drone specifications (2h), Did calculations on mass prediction (6h), Mathematica calculations (1h)
|-
|Robert Arnhold
|13h
|Attended lecture (30min), Meeting with group (1.5h), Investigated mechanical design concepts and documented process (7h), Summarized discussion and learnings (2h, not yet finished), Contacted new interviewees (2h)
|-
|Tim Damen
|17h
|Attended lecture (30min), Meeting with group (1.5h), Analysed 4 papers (4h), Research on how to catch a projectile with a net (6h), Written text on how to catch a projectile with a net (2h), Worked on a Mathematica script for calculations to catch a projectile (3h)
|-
|Ruben Otter
|15h
|Attended lecture (30min), Meeting with group (1.5h), Attended meeting with Ruud (1h), Set up Raspberry Pi (2h), Research into audio sensors (2h), Started coding and connecting audio sensor with Raspberry Pi (8h)
|-
|Raul Sanchez Flores
|12h
|Attended lecture (30min), Meeting with group (1.5h), Attended meeting with Ruud (1h), Wrote specifications for our design (5h), Formatted table of requirements (1h), Began writing testing methods for each specification (3h)
|}
Week 5
{| class="wikitable"
|+
!Name
!Total
!Break-down
|-
|Max van Aken
|11h
|Attended lecture (30min), Meeting with group (1.5h), Spring calculations (4h), Worked on microphone code (5h)
|-
|Robert Arnhold
|11h
|Attended lecture (30min), Meeting with group (1.5h), Discussed schedule with potential new interviewee and developed mechanical design (2h), Summarized focus and key learnings from previous interview (2h), Investigated mechanical components, materials, structures, and other design features (5h)
|-
|Tim Damen
|17h
|Attended lecture (30min), Meeting with group (1.5h), Finalized Mathematica to catch projectile in 2D (1h), Research on catching projectile in 3D (4h), Worked on Mathematica to catch projectile in 3D (6h), Analysis of accuracy of the 2D model (2h), Writing the text for the wiki (1h), General research to write text and do calculations (1h)
|-
|Ruben Otter
|11h
|Attended lecture (30min), Meeting with group (1.5h), Continued integrating microphone sensor with Raspberry Pi (9h)
|-
|Raul Sanchez Flores
|17h
|Attended lecture (30min), Meeting with group (1.5h), Finished writing testing methods (5h), Research into YOLOv8 and watching a video of its usage (4h), Found appropriate training and validation drone datasets (3h), Began implementing Python code to train on drone datasets using YOLOv8 (3h)
|}
Week 6
{| class="wikitable"
|+
!Name
!Total
!Break-down
|-
|Max van Aken
|11h
|Attended lecture (30min), Meeting with group (1.5h), Began with presentation/text (6h), Further developed microphone code (3h)
|-
|Robert Arnhold
|10h
|Attended lecture (30min), Meeting with group (1.5h), Performed second interview with volunteer in Ukraine (2h), Collected and structured notes for review (2h), Continued mechanical design research and formalized CAD design (4h)
|-
|Tim Damen
|8h
|Attended lecture (30min), Meeting with group (1.5h), Worked on creating the 3D model in Mathematica (5h), Written part about the 3D model (1h)
|-
|Ruben Otter
|11h
|Attended lecture (30min), Meeting with group (1.5h), Downloaded and learned MatLab syntax (3h), Research into different types of drones (2h), Finding drone sound files for specific drones (1h), Coded in MatLab to analyse the frequency of the sample drone sound (3h)
|-
|Raul Sanchez Flores
|10.5h
|Attended lecture (30min), Meeting with group (1.5h), Trained YOLOv8 model on a drone dataset (1.5h), Wrote email to Ruud to borrow drone (0.5h), Meeting with Ruud to see if we can borrow drone (0.5h), Wrote emails to Duarte Guerreiro Tomé Antunes to borrow drone (0.5h), Looked for other places to borrow a drone (1.5h), Tested YOLOv8 model on drone videos on the internet (4h)
|}
Week 7
{| class="wikitable"
|+
!Name
!Total
!Break-down
|-
|Max van Aken
|8h
|Attended lecture (30min), Meeting with group (1.5h), Writing text presentation (2h), Testing code to convert output to chance (4h)
|-
|Robert Arnhold
|12h
|Attended lecture (30min), Meeting with group (1.5h), Summarized secondary interview notes into key takeaways and confirmed specifications (3h), Completed CAD design for given specifications (2h), 3D printed main body of system (4h), Constructed body of prototype (1h)
|-
|Tim Damen
|12.5h
|Attended lecture (30min), Meeting with group (1.5h), Research on type of projectiles, dimensions, effects of air resistance on these projectiles and on net (4h), Adapted model with wind (0.5h), Specified assumptions made based on research done (2h), Tested accuracy of 3D model (2h), Written text about accuracy of 3D model (1h), Cleared up some parts on the wiki page (1h)
|-
|Ruben Otter
|16h
|Attended lecture (30min), Meeting with group (1.5h), Continued coding in MatLab to analyse the frequency of the sample drone sound and comparing these with other sound files (4h), Research into Sensor Fusion (3h), Applied Sensor Fusion methodology on the test results (2h), Created presentation (2h), Prepared for presentation (3h)
|-
|Raul Sanchez Flores
|14.5h
|Attended lecture (30min), Meeting with group (1.5h), Implemented real-time drone detection using camera and YOLOv8 (4h), Re-tested YOLOv8 model using a larger training dataset with higher-resolution images (3h), Flew drone outside, recorded it, and tested the real-time detection using the camera (1.5h), Tested newly trained YOLOv8 model on recorded videos and cut them for the presentation (4h)
|}
Week 8
{| class="wikitable"
|+
!Name
!Total
!Break-down
|-
|Max van Aken
|7h
|Attended lecture (30min), Meeting with group (1.5h), Writing/reading wiki (5h)
|-
|Robert Arnhold
|12h
|Attended lecture (30min), Meeting with group (1.5h), Completed mechanical design (5h), Formalized interview notes for wiki (2h), Wrote up wiki sections (3h)
|-
|Tim Damen
|7h
|Attended lecture (30min), Meeting with group (1.5h), Changed order on wiki and improved some small parts (1h), Finished up last pieces of my text (1h), Improved the photos (1h), Final review of wiki (2h)
|-
|Ruben Otter
|10.5h
|Attended lecture (30min), Meeting with group (1.5h), Prepared for presentation (1h), Presented the presentation (30min), Some additional research into Sensor Fusion (2h), Wrote part about sensor fusion from a theoretical point of view (2h), Wrote part about how we applied certain sensor fusion methodology in our testing (2h), Wrote future work section (1h)
|-
|Raul Sanchez Flores
|12h
|Attended lecture (30min), Meeting with group (1.5h), Added specification justification section to the wiki (4h), Wrote about the visual-based drone detection in the wiki (4h), Added citations to my sections in the wiki (2h)
|}
=='''References'''==
#How to read a risk matrix used in a risk analysis (assessor.com.au)
#Bharath Bharadwaj B S, Apeksha S, Bindu N P, S Shree Vidya Spoorthi, Udaya S (2023). A deep learning approach to classify drones and birds. https://www.irjet.net/archives/V10/i4/IRJET-V10I4263.pdf
#Making Drones to Kill Civilians: Is it Ethical? Journal of Business Ethics (springer.com)
#Stanford Encyclopedia of Philosophy, Virtue Ethics. https://plato.stanford.edu/entries/ethics-virtue/
#Peter Graham, Thomson's Trolley Problem. DOI: 10.26556/jesp.v12i2.227
#Ethics Unwrapped, Deontology. https://ethicsunwrapped.utexas.edu/glossary/deontology
#Utilitarianism meaning, Cambridge Dictionary. https://dictionary.cambridge.org/dictionary/english/utilitarianism
#Chiper F-L, Martian A, Vladeanu C, Marghescu I, Craciunescu R, Fratu O. Drone Detection and Defense Systems: Survey and a Software-Defined Radio-Based Solution. Sensors. 2022; 22(4):1453. https://doi.org/10.3390/s22041453
#The Rise of Radar-Based UAV Detection for Military: A Game-Changer in Modern Warfare. (2024, June 11). Spotter Global. https://www.spotterglobal.com/blog/spotter-blog-3/the-rise-of-radar-based-uav-detection-for-military-a-game-changer-in-modern-warfare-8
#Seidaliyeva U, Ilipbayeva L, Taissariyeva K, Smailov N, Matson ET. Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review. Sensors. 2024; 24(1):125. https://doi.org/10.3390/s24010125
#Caballero-Martin D, Lopez-Guede JM, Estevez J, Graña M. Artificial Intelligence Applied to Drone Control: A State of the Art. Drones. 2024; 8(7):296. https://doi.org/10.3390/drones8070296
#Svanström F, Alonso-Fernandez F, Englund C. Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities. Drones. 2022; 6:317. https://doi.org/10.3390/drones6110317
#Samaras S, Diamantidou E, Ataloglou D, Sakellariou N, Vafeiadis A, Magoulianitis V, Lalas A, Dimou A, Zarpalas D, Votis K, et al. Deep Learning on Multi Sensor Data for Counter UAV Applications—A Systematic Review. Sensors. 2019; 19:4837. https://doi.org/10.3390/s19224837
#Coyote UAS. (n.d.). https://www.rtx.com/raytheon/what-we-do/integrated-air-and-missile-defense/coyote
#DroneGun MK3: Counterdrone (C-UAS) Protection — DroneShield. (n.d.). https://www.droneshield.com/c-uas-products/dronegun-mk3
#Michel, A. H., The Center for the Study of the Drone at Bard College, Aasiyah Ali, Lynn Barnett, Dylan Sparks, Josh Kim, John McKeon, Lilian O'Donnell, Blades, M., Frost & Sullivan, Peace Research Institute Oslo, Norwegian Ministry of Defense, Open Society Foundations, Pvt. James Newsome, & Senior Airman Kaylee Dubois. (2019). Counter-Drone Systems (D. Gettinger, Isabel Polletta, & Ariana Podesta, Eds.; 2nd Edition). https://dronecenter.bard.edu/files/2019/12/CSD-CUAS-2nd-Edition-Web.pdf
#Hecht, J. (2021, June 24). Liquid lasers challenge fiber lasers as the basis of future high-energy weapons. IEEE Spectrum. https://spectrum.ieee.org/fiber-lasers-face-a-challenger-in-laser-weapons
#DroneHunter® F700. (2024, October 24). Fortem Technologies. https://fortemtech.com/products/dronehunter-f700/
#DJI FlySafe. (n.d.). https://fly-safe.dji.com/nfz/nfz-query
#Iron Beam laser weapon, Israel. (2023, November 1). Army Technology. https://www.army-technology.com/projects/iron-beam-laser-weapon-israel/
#Autonomous Ball Catcher Part 1: Hardware — Baucom Robotics
#Ball Detection and Tracking with Computer Vision - InData Labs
#Detecting Bullets Through Electric Fields - DSIAC
#An Introduction to BYTETrack: Multi-Object Tracking by Associating Every Detection Box (datature.io)
#Online Trajectory Generation with 3D camera for industrial robot - Trinity Innovation Network (trinityrobotics.eu)
#Explosives Delivered by Drone – DRONE DELIVERY OF CBNRECy – DEW WEAPONS Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD) (pressbooks.pub)
#Deadliest weapons: The high-explosive hand grenade (forcesnews.com)
#SM2025.pdf (myu-group.co.jp)
#Trajectory estimation method of spinning projectile without velocity input - ScienceDirect
#An improved particle filtering projectile trajectory estimation algorithm fusing velocity information - ScienceDirect
#Generating physically realistic kinematic and dynamic models from small data sets: An application for sit-to-stand actions (researchgate.net)
#https://kestrelinstruments.com/mwdownloads/download/link/id/100/
#Normal and Tangential Drag Forces of Nylon Nets, Clean and with Fouling, in Fish Farming. An Experimental Study (mdpi.com)
#A model for the aerodynamic coefficients of rock-like debris - ScienceDirect
#Explosives Delivered by Drone – DRONE DELIVERY OF CBNRECy – DEW WEAPONS Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD) (pressbooks.pub)
#Influence of hand grenade weight, shape and diameter on performance and subjective handling properties in relation to ergonomic design considerations - ScienceDirect
#'Molotov Cocktail' incendiary grenade | Imperial War Museums (iwm.org.uk)
#My Global issues - YouTube
#4 Types of Distance Sensors & How to Choose the Right One | KEYENCE America
#Ukrainian Mountain Battalion drop grenades on Russian forces with weaponised drone (youtube.com)
#Across, M. (2024, September 27). Surviving 25,000 Miles Across War-torn Ukraine | Frontline | Daily Mail. YouTube. https://youtu.be/kqKGYn13MeM?si=VPhO7jFG0sHQQXiW
#Willy, Enock. Autonomous Weapons Systems and International Humanitarian Law: Need for Expansion or Not (November 16, 2020). Available at SSRN: https://ssrn.com/abstract=3867978 or http://dx.doi.org/10.2139/ssrn.3867978
#Chmielus, T. (2024). Drone defense system (U.S. Patent No. 11,876,611). United States Patent and Trademark Office. https://patentsgazette.uspto.gov/week03/OG/html/1518-3/US11876611-20240116.html
#Kovacs, A. (2024, February 1). Small Unmanned Aerial Systems (SUAS) and the Force Protection Threat to DOD. RMC. https://rmcglobal.com/small-unmanned-aerial-systems-suas-and-the-force-protection-threat-to-dod/
#Brust, M. R., Danoy, G., Stolfi, D. H., & Bouvry, P. (2021). Swarm-based counter UAV defense system. Discover Internet of Things, 1(1). https://doi.org/10.1007/s43926-021-00002-x
#Small drone threat grows more complex, deadly as tech advances. (n.d.). https://www.nationaldefensemagazine.org/articles/2023/8/30/small-drone-threat-grows-more-complex-deadly-as-tech-advances
#Technology for innovative entrepreneurs & businesses | TechLink. (n.d.). https://techlinkcenter.org/news/us-army-invents-40mm-grenade-that-nets-bad-drones
#Making Drones to Kill Civilians: Is it Ethical? Journal of Business Ethics (springer.com)
#Ethics, autonomy, and killer drones: Can machines do right? (tandfonline.com)
#Survey on anti-drone systems: Components, designs, and challenges (IEEE Xplore full-text PDF)
#Artificial intelligence, robotics, ethics, and the military: a Canadian perspective. AI Magazine. https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2848
#When AI goes to war: Youth opinion, fictional reality and autonomous weapons - ScienceDirect
#Seidaliyeva, U., Ilipbayeva, L., Taissariyeva, K., Smailov, N., & Matson, E. T. (2024). Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review. Sensors, 24(1), 125. https://doi.org/10.3390/s24010125
#Gonzalez-Jorge, H., Aldao, E., Fontenla-Carrera, G., Veiga Lopez, F., Balvís, E., & Ríos-Otero, E. (2024). Counter Drone Technology: A Review. https://doi.org/10.20944/preprints202402.0551.v1
#Silva, D., Machado, R., Coutinho, O., & Antreich, F. (2023). A Soft-Kill Reinforcement Learning Counter Unmanned Aerial System (C-UAS) with Accelerated Training. IEEE Access. https://doi.org/10.1109/ACCESS.2023.3253481
#Şen, O., & Akarslan, H. (2020). Terrorist Use of Unmanned Aerial Vehicles: Turkey's Example.
#Wang, Y., Song, W.-D., Song, X.-E., & Zhang, X.-Q. (2015). Impact-point prediction of trajectory-correction grenade based on perturbation theory. 27, 18-23.
#Armed Drones and Ethical Policing: Risk, Perception, and the Tele-Present Officer. Published online 2021 Jun 19. https://doi.org/10.1080/0731129X.2021.1943844 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8367046/
#Quintana, E. The Ethics and Legal Implications of Military Unmanned Vehicles. Royal United Services Institute for Defence and Security Studies. https://static.rusi.org/assets/RUSI_ethics.pdf
#Countering the Drone Threat: Implications of C-UAS Technology for Norway in an EU and NATO Context. https://www.researchgate.net/profile/Bruno-Martins-4/publication/348189950_Countering_the_Drone_Threat_Implications_of_C-UAS_technology_for_Norway_in_an_EU_and_NATO_context/links/5ff3240492851c13feeb0e08/Countering-the-Drone-Threat-Implications-of-C-UAS-technology-for-Norway-in-an-EU-and-NATO-context.pdf
#Chen, Y., Li, Z., Li, L., Ma, S., Zhang, F., & Fan, C. An anti-drone device based on capture technology. https://doi.org/10.1016/j.birob.2022.100060 https://www.sciencedirect.com/science/article/pii/S2667379722000237
#Zabunov, S., & Mardirossian, G. Four innovative drone interceptors. https://doi.org/10.7546/CRABS.2024.02.09
#Platt, J., & Ricks, K. (2022). Comparative Analysis of ROS-Unity3D and ROS-Gazebo for Mobile Ground Robot Simulation. Journal of Intelligent & Robotic Systems, 106, 80. https://doi.org/10.1007/s10846-022-01766-2