PRE2024 1 Group2

Group Members

Name                 Student ID  Department
Max van Aken         1859455     Applied Physics
Robert Arnhold       1847848     Mechanical Engineering
Tim Damen            1874810     Applied Physics
Ruben Otter          1810243     Computer Science
Raul Sanchez Flores  1844512     Computer Science / Applied Mathematics

Problem Statement

The goal of this project is to create an easily portable system that can be used as the last line of defense against incoming projectiles.

Objectives

To come up with a solution for our problem we have the following objectives in mind:

  • Determine how drones and projectiles can be detected.
  • Determine how a drone or projectile can be intercepted and/or redirected.
  • Build a prototype of this portable device.
  • Build a simulation of the usage of this system.
  • Explore and determine ethical implications of the portable device.
  • Prove the system’s utility.

Planning

Within the upcoming 8 weeks we will be working on this project. Below, the project's tasks are laid out over these 8 weeks, along with when they will be performed.

Week  Task
1     Initial planning and setting up the project.
2     Literature research.
3     Create ethical framework.
      Conduct an interview with an expert to confirm and construct the use cases.
      Start constructing prototype, software and simulation concepts.
      Determine potential problems.
4     Continue constructing prototype, software and simulation.
5     Finish prototype, software and simulation.
6     Testing prototype to verify its effectiveness and use cases.
      Evaluate testing results and make final changes.
7     Finish Wiki page.
8     Create final presentation.

Users and their Requirements

We currently have two main use cases in mind for this project, which are the following:

  • Military forces facing threats from drones and projectiles.
  • Privately-managed critical infrastructure in areas at risk of drone-based attacks.

The users of the system will require the following:

  • Minimal maintenance
  • High reliability
  • System should not pose an additional threat to its surroundings
  • System must be personally portable
  • System should work reliably in dynamic, often extreme, environments
  • System should be scalable and interoperable in concept

Literature Research

Autonomous Weapons Systems and International Humanitarian Law: Need for Expansion or Not[1]

A significant challenge with autonomous systems is ensuring compliance with international laws, particularly IHL. The paper delves into how such systems can be designed to adhere to humanitarian law and discusses critical and optional features such as the capacity to identify combatants and non-combatants effectively. This is directly relevant to ensuring our system's utility in operational contexts while adhering to ethical norms.

Artificial Intelligence Applied to Drone Control: A State of the Art[2]

This paper explores the integration of AI in drone systems, focusing on enhancing autonomous behaviors such as navigation, decision-making, and failure prediction. AI techniques like deep learning and reinforcement learning are used to optimize trajectory, improve real-time decision-making, and boost the efficiency of autonomous drones in dynamic environments.

Drone Detection and Defense Systems: Survey and Solutions[3]

This paper provides a comprehensive survey of existing drone detection and defense systems, exploring various sensor modalities like radio frequency (RF), radar, and optical methods. The authors propose a solution called DronEnd, which integrates detection, localization, and annihilation functions using Software-Defined Radio (SDR) platforms. The system highlights real-time identification and jamming capabilities, critical for intercepting drones with minimal collateral effects.
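
The RF-detection side of such SDR platforms can be illustrated with a minimal energy detector: flag a possible drone link when the average received power rises well above the noise floor. This is only a hedged sketch of the general idea, not DronEnd's actual algorithm; the function name, noise-floor value and margin are our own illustrative assumptions.

  import numpy as np

  def detect_rf_energy(iq_samples, noise_floor_db=0.0, margin_db=6.0):
      # Flag a possible drone link if the average received power exceeds
      # the noise floor by a chosen margin (all values illustrative).
      power_db = 10 * np.log10(np.mean(np.abs(iq_samples) ** 2))
      return power_db > noise_floor_db + margin_db

  # Usage with synthetic IQ samples: unit-power noise, plus a carrier.
  rng = np.random.default_rng(0)
  noise = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
  carrier = 3.0 * np.exp(2j * np.pi * 0.1 * np.arange(4096))
  print(detect_rf_energy(noise))            # False: noise only
  print(detect_rf_energy(noise + carrier))  # True: ~10 dB above the floor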

Advances and Challenges in Drone Detection and Classification[4]

This state-of-the-art review highlights the latest advancements in drone detection techniques, covering RF analysis, radar, acoustic, and vision-based systems. It emphasizes the importance of sensor fusion to improve detection accuracy and effectiveness.

Autonomous Defense System with Drone Jamming capabilities[5]

This patent describes a drone defense system comprising at least one jammer and at least one radio detector. The system is designed to send out interference signals that block a drone's communication or GPS signals, causing it to land or return. It also uses a technique where the jammer temporarily interrupts the interference signal to allow the radio detector to receive data and locate the drone's position or intercept its control signals.
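
The patent's jam-then-listen technique can be sketched as a simple duty-cycle loop: jam most of the time, but briefly go silent so the radio detector can receive the drone's signals and update its position estimate. The sketch below is our own hypothetical rendering with stubbed hardware hooks (start_jamming, stop_jamming and locate_drone are placeholders, not the patented implementation).

  import random
  import time

  def start_jamming():
      print("jammer on")    # stand-in for real RF hardware control

  def stop_jamming():
      print("jammer off")

  def locate_drone():
      # Stand-in detector: a real system would direction-find the
      # drone's control link during the silent window.
      return (random.uniform(0, 100), random.uniform(0, 100))

  def jam_with_listen_windows(jam_s=0.9, listen_s=0.1, cycles=3):
      # Duty-cycle loop: interfere, pause, listen, re-estimate position.
      for _ in range(cycles):
          start_jamming()
          time.sleep(jam_s)     # interference window
          stop_jamming()
          time.sleep(listen_s)  # detector listens while the jammer is silent
          print("drone position estimate:", locate_drone())

  jam_with_listen_windows()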

Small Unmanned Aerial Systems (sUAS) and the Force Protection Threat to DoD[6]

This article discusses the increasing threat posed by small unmanned aerial systems (sUAS) to military forces, particularly the U.S. Department of Defense (DoD). It highlights how enemies are using these drones for surveillance and delivery of explosives.

The Rise of Radar-Based UAV Detection For Military: A Game-Changer in Modern Warfare[7]

This article discusses how radar-based unmanned aerial vehicle (UAV) detection is transforming military operations. SpotterRF’s systems use advanced radar technology to detect drones in all conditions, including darkness or bad weather. By integrating AI, these systems can distinguish between drones and non-threats like birds, improving accuracy and reducing false positives.

Swarm-based counter UAV defense system[8]

This article discusses autonomous systems designed to detect and intercept drones. It emphasizes the use of AI and machine learning to improve the real-time detection, classification, and interception of drones, focusing on autonomous UAVs (dUAVs) that can neutralize threats. The research delves into algorithms and swarm-based defense strategies that optimize how drones are intercepted.
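
As a toy stand-in for such swarm coordination (the paper's own algorithms are more sophisticated), one core sub-problem is assigning each intruding drone to a defender; a greedy nearest-defender pairing illustrates it, with all names and positions assumed for the example:

  import numpy as np

  def assign_defenders(defenders, intruders):
      # Greedily pair each intruder with the nearest still-free defender.
      # Assumes at least as many defenders as intruders; positions are
      # (N, 2) arrays. A toy heuristic, not the paper's swarm strategy.
      free = set(range(len(defenders)))
      pairs = {}
      for i, intruder in enumerate(intruders):
          best = min(free, key=lambda d: np.linalg.norm(defenders[d] - intruder))
          pairs[i] = best
          free.remove(best)
      return pairs

  defenders = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])
  intruders = np.array([[9.0, 1.0], [1.0, 1.0]])
  print(assign_defenders(defenders, intruders))  # {0: 1, 1: 0}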

Small Drone Threat Grows More Complex, Deadly as Tech Advances[9]

The article highlights the growing threat of small UAVs to military operations. It discusses how these systems are used by enemies for surveillance and direct attacks, and the various countermeasures the U.S. Department of Defense is developing to stop these attacks. It explores the use of jamming (interference with the connection between drone and controller), radio frequency sensing, and mobile detection systems.

US Army invents 40mm grenade that nets bad drones[10]

This article discusses recently developed technology that involves a 40mm grenade that deploys a net to capture and neutralise hostile drones. This system can be fired from a standard grenade launcher, providing a portable, low-cost method of taking down small unmanned aerial systems (sUAS) without causing significant collateral damage.

Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review[11]

Summary:

  • The paper provides a comprehensive review of drone detection and classification technologies. It delves into the various methods employed to detect and identify drones, including radar, radio frequency (RF), acoustic, and vision-based systems. Each has its strengths and weaknesses; the authors then discuss 'sensor fusion', where combining detection methods improves system performance and robustness.

Key takeaways:

  • Sensor fusion should be incorporated into the system to improve performance and robustness (a minimal sketch of this idea follows below)
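
A minimal sketch of the fusion idea, assuming a simple normalized weighted average of per-sensor detection confidences; the sensor names and weights are illustrative, and the paper itself surveys far richer fusion schemes:

  def fuse_detections(confidences, weights):
      # Combine per-sensor detection confidences (0..1) into one score
      # via a weighted average, normalized over the sensors present.
      total = sum(weights[s] for s in confidences)
      return sum(confidences[s] * weights[s] for s in confidences) / total

  # Example: radar weighted highest, camera lowest (e.g. bad weather).
  weights = {"radar": 0.5, "rf": 0.3, "camera": 0.2}
  print(fuse_detections({"radar": 0.9, "rf": 0.7, "camera": 0.2}, weights))
  # ~0.70 -> likely a drone even though the camera alone is unsure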

Making drones to kill civilians: is it ethical?[12]

Usually, anything where harm is done to innocent people is seen as unethical. This would mean that every company that somehow supplies items for war is doing something at least partially unethical. However, under international law a country at war is not bound by all traditional ethics. This makes it harder to decide what is ethical and what is not.

Sociocultural objections to using killer drones against civilians:

-       Civilians (not combatants) are killed by drones, since the drones cannot tell the difference between people who are part of the war and people who are not

-       We should not see war as a ‘clash of civilizations’, as this framing would imply that civilians are also part of the war

Is it ethical to use drones to kill civilians?:

-       As said above, international law applies during a war between countries. This law implies:

o  Killing civilians = murder = prohibited

-       People attacked by drones say that it is not drones that kill people, but people who kill people

A simple solution is to follow the 3 laws of robotics from Isaac Asimov:

-       A robot may not injure a human being or allow a human being to come to harm

-       A robot must obey orders given to it by human beings except when such orders conflict with the first law

-       A robot must protect its own existence, provided such protection does not conflict with the first or second law

But following these laws would be too simplistic, as Asimov's laws are fiction, not actual legislation

The current drone-killing rationale:

-       A person is targeted only if he/she is harmful to the interests of the country for as long as he/she remains alive

This rationale is objected to as follows:

-       The rationale is simply assumed, since international law says nothing about the random targeting of individuals

This objection is countered as follows:

-       A target outside a warzone is only selected if he/she is harmful to the interests of the country; such a person would therefore not be a random target

Is it legal and ethical to produce drones that are used to kill civilians?:

A manufacturer of killer drones may not assume its drones are only being used peacefully.

The manufacturers of killer drones often issue cautionary recommendations, which mainly serve to put the manufacturers in a legally safe position.

Conclusion:

The problem is that drone killing is not covered by the traditional laws of war. The literature is not uniformly opposed to drone killing, but the majority holds that killing civilians is unethical.


Ethics, autonomy, and killer drones: Can machines do right?[13]

The article examines the ethics of certain weapons used in war (in the US). By looking back at weapons that were new at the time (like the atomic bomb), we can compare what people thought was ethical then with what we think is ethical now. The article uses two different viewpoints to judge the ethics of a war technology, namely teleology and deontology. Teleology focuses on the outcome of an action, while deontology focuses more on the duty behind an action.

The article first looks at the atomic bomb, which from a teleological viewpoint could be seen as ethical, as it would bring a quick end to the war and thereby save lives in the long term. Deontology could also call it ethical, since possessing such strong weapons demonstrates superiority and intimidates other countries in war.

Next up for discussion is a torture program. According to teleology this is ethical, since torturing some people to extract critical information from them could prevent more deaths in the future.

The article then turns to AI-enabled drones. For ethical AI, the AI should always be governed by humans, bias should be reduced (many civilians are currently being killed), and there should be more transparency. For a country this is more challenging, since it also has to focus on safety and on winning the war. This is why, in contrast to the atomic bomb, where teleology and deontology agreed, there is now a tension between the two: teleology focuses on outcomes, thus security and protection, while deontology focuses on global values, like human rights. The article says the challenge is to use AI technologies effectively while following ethical principles, and to get everyone to do the same.


Survey on anti-drone systems: components, designs, and challenges[14]

Requirements an anti-drone system must meet:

-       Drone-specialized detection (detect the drone)

-       Multi-drone defensibility (defend against multiple drones)

-       Cooperation with security organizations (restrictions on functionality should be discussed with public security organizations such as the police and military)

-       System portability (lightweight and using wireless networks)

-       Non-military neutralization (do not use military weapons to defend against drones)

Ways to detect drones:

-       Thermal detection (motors, batteries and internal hardware produce heat)

o  Works in all weather

o  Affordable

o  Limited range

-       RF scanner (captures wireless signals)

o  Cannot detect drones that do not emit RF signals

o  Long range

-       Radar detection (detects objects and determines their shape)

o  Long range

o  Cannot detect a drone that is not moving, since it is then classified as a static obstacle

-       Optical camera detection (detects drones in video footage)

o  Short range

o  Weather dependent

Hybrid detection systems to detect drones:

-       Radar + vision

-       Multiple RF scanners

-       Vision + acoustic

Drone neutralization:

-       Hijacking/spoofing (create a fake signal to prevent the drone from moving)

-       Geofencing (prevent the drone from approaching a certain point; see the sketch after this list)

-       Jamming (stopping radio communication between the drone and its controller)

-       Killer drones (using drones to damage attacking drones)

-       Capture (physically capture a drone, for example with a net)

o  Terrestrial capture systems (human-held or vehicle-mounted)

o  Aerial capture systems (mounted on defender drones)
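
A hedged, minimal 2-D version of the geofencing check mentioned above; real systems use geodetic coordinates and polygonal zones, and the names and radius here are our own assumptions:

  import math

  def inside_geofence(drone_xy, center_xy, radius_m):
      # True if the drone has entered a circular no-fly zone around a
      # protected point.
      return math.dist(drone_xy, center_xy) <= radius_m

  print(inside_geofence((120.0, 80.0), (100.0, 100.0), radius_m=50.0))  # True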

Determination of threat level:

-       Object

-       Flight path

-       Available time

(NOTE: The article goes into more depth about the mathematics used to determine the threat level, which could be used in our system; a toy scoring sketch follows below)
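
Without reproducing the article's mathematics, a toy scoring function shows how the three factors could combine: a more dangerous object and a shorter time-to-target (distance over approach speed, i.e. flight path and available time) yield a higher score. Everything below is our own illustrative assumption, not the article's model:

  def threat_level(object_weight, approach_speed_mps, distance_m):
      # Toy score: threat grows with the object's danger weight and
      # shrinks with the available reaction time (distance / speed).
      time_to_target_s = distance_m / max(approach_speed_mps, 1e-6)
      return object_weight / time_to_target_s

  # A fast, close drone scores far higher than a slow, distant one.
  print(threat_level(object_weight=1.0, approach_speed_mps=20.0, distance_m=100.0))  # 0.2
  print(threat_level(object_weight=1.0, approach_speed_mps=5.0, distance_m=1000.0))  # 0.005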


Artificial intelligence, robotics, ethics, and the military: a Canadian perspective[15]

The article looks not only at the ethics, but also at the social and legal aspects of using artificial intelligence in the military. For this it examines three main aspects of AI: Accountability and Responsibility, Reliability, and Trust.

Accountability and Responsibility:

The article states that accountability and responsibility for the actions of an AI system lie with its operator, who is a human. However, when the AI malfunctions it becomes challenging to determine who is accountable.

Reliability:

AI today is not reliable enough and only performs well in the very specific situations it was made for. In military use you never know what situation an AI will end up in, which causes a lack of reliability. Verification of AI technologies is necessary, especially when dealing with the life and death of humans.

Trust:

People who use AI in the military should be taught how the AI works and to what extent they can trust it. Too much or too little trust in AI can lead to big mistakes. The makers of these AI systems should be more transparent, so that it can be understood what the AI does.

We need a proactive approach to minimize the risks of AI. This means that everyone who uses or is involved with AI in the military should carefully consider the risks that AI brings.


When AI goes to war: Youth opinion, fictional reality and autonomous weapons[16]

The article looks into the responsibilities and risks of fully autonomous robots in war. It does this by surveying youth participants about the topic, combined with other research and theory.

The article found that the participants felt humans should be responsible for the actions of autonomous robots. This is supported by theory saying that, since robots do not have emotions like humans do, they cannot be responsible for their actions in the same way humans are. If autonomous robots were programmed with some ethics in mind, the robot could in some way be held accountable for its actions as well. How this responsibility should be divided between humans and robots remained unclear in the article. Some said responsibility lay purely with the engineers, designers and government, while others said that human and robot share the responsibility.

The article also found that fears of fully autonomous robots persist. These stem from old myths and from social media, which say that autonomous robots can turn against humans to destroy them.

As for the legal side of autonomous robots, they can potentially violate laws during war, especially if they are not held responsible for their actions. This worries the youth.

For the youth, the threats that fully autonomous robots bring outweigh the benefits. This is a sign for the scientific community to further develop and implement norms and regulations for autonomous robots.

Logbook

Week 1
Name                 Total  Break-down
Max van Aken                Attended lecture (2h), Attended meeting with group (2h)
Robert Arnhold              Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [11], [12], [13], [14], [15] (10h), Summarized and described key takeaways for papers/patents [11], [12], [13], [14], [15] (2h)
Tim Damen            16h    Attended lecture (2h), Attended meeting with group (2h), Analysed papers [12], [13], [14], [15], [16] (10h), Summarized and described key takeaways for papers [12], [13], [14], [15], [16] (2h)
Ruben Otter          17h    Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [1], [2], [3], [4], [5] (10h), Summarized and described key takeaways for papers/patents [1], [2], [3], [4], [5] (2h), Set up Wiki page (1h)
Raul Sanchez Flores  16h    Attended lecture (2h), Attended meeting with group (2h), Analysed papers/patents [6], [7], [8], [9], [10] (10h), Summarized and described key takeaways for papers/patents [6], [7], [8], [9], [10] (2h)

References

  1. Willy, Enock, Autonomous Weapons Systems and International Humanitarian Law: Need for Expansion or Not (November 16, 2020). Available at SSRN: https://ssrn.com/abstract=3867978 or http://dx.doi.org/10.2139/ssrn.3867978
  2. Caballero-Martin D, Lopez-Guede JM, Estevez J, Graña M. Artificial Intelligence Applied to Drone Control: A State of the Art. Drones. 2024; 8(7):296. https://doi.org/10.3390/drones8070296
  3. Chiper F-L, Martian A, Vladeanu C, Marghescu I, Craciunescu R, Fratu O. Drone Detection and Defense Systems: Survey and a Software-Defined Radio-Based Solution. Sensors. 2022; 22(4):1453. https://doi.org/10.3390/s22041453
  4. Seidaliyeva U, Ilipbayeva L, Taissariyeva K, Smailov N, Matson ET. Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review. Sensors. 2024; 24(1):125. https://doi.org/10.3390/s24010125
  5. Chmielus, T. (2024). Drone defense system (U.S. Patent No. 11,876,611). United States Patent and Trademark Office. https://patentsgazette.uspto.gov/week03/OG/html/1518-3/US11876611-20240116.html
  6. Kovacs, A. (2024, February 1). Small Unmanned aerial Systems (SUAS) and the force protection threat to DOD. RMC. https://rmcglobal.com/small-unmanned-aerial-systems-suas-and-the-force-protection-threat-to-dod/
  7. The rise of Radar-Based UAV Detection for Military: A Game-Changer in Modern Warfare. (2024, June 11). Spotter Global. https://www.spotterglobal.com/blog/spotter-blog-3/the-rise-of-radar-based-uav-detection-for-military-a-game-changer-in-modern-warfare-8
  8. Brust, M. R., Danoy, G., Stolfi, D. H., & Bouvry, P. (2021). Swarm-based counter UAV defense system. Discover Internet of Things, 1(1). https://doi.org/10.1007/s43926-021-00002-x
  9. Small drone threat grows more complex, deadly as tech advances. (n.d.). https://www.nationaldefensemagazine.org/articles/2023/8/30/small-drone-threat-grows-more-complex-deadly-as-tech-advances
  10. Technology for innovative entrepreneurs & businesses | TechLink. (n.d.). https://techlinkcenter.org/news/us-army-invents-40mm-grenade-that-nets-bad-drones
  11. Seidaliyeva, U., Ilipbayeva, L., Taissariyeva, K., Smailov, N., & Matson, E. T. (2024). Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review. Sensors, 24(1), 125. https://doi.org/10.3390/s24010125
  12. Making Drones to Kill Civilians: Is it Ethical? Journal of Business Ethics. (via springer.com)
  13. Ethics, autonomy, and killer drones: Can machines do right? (via tandfonline.com)
  14. Survey on Anti-Drone Systems: Components, Designs, and Challenges. (via IEEE Xplore)
  15. Artificial intelligence, robotics, ethics, and the military: a Canadian perspective. AI Magazine. https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2848
  16. When AI goes to war: Youth opinion, fictional reality and autonomous weapons. (via ScienceDirect)