Current Events > Drones will soon decide who to kill on their own

dreamvoid
04/12/18 6:41:35 PM
#1:


https://www.upi.com/Drones-will-soon-decide-who-to-kill/3561523449841/

The U.S. Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence. This is a big step forward. Whereas current military drones are still controlled by people, this new technology will decide who to kill with little human involvement.

Once complete, these drones will represent the ultimate militarization of AI and trigger vast legal and ethical implications for wider society. There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process. At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

Existing lethal military drones like the MQ-9 Reaper are carefully controlled and piloted via satellite. If a pilot drops a bomb or fires a missile, a human sensor operator actively guides it onto the chosen target using a laser.

Ultimately, the crew has the final ethical, legal and operational responsibility for killing designated human targets. As one Reaper operator states: "I am very much of the mindset that I would allow an insurgent, however important a target, to get away rather than take a risky shot that might kill civilians."

Even with these drone killings, human emotions, judgments and ethics have always remained at the center of war. The existence of mental trauma and post-traumatic stress disorder among drone operators shows the psychological impact of remote killing.

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings. But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are becoming evident. Under current international humanitarian law, "dual-use" facilities -- those that develop products for both civilian and military application -- can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as civilian cars.

Ethically, there are even darker issues. The whole point of the self-learning algorithms this technology uses -- programs that independently learn from whatever data they can collect -- is that they become better at whatever task they are given. If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development -- how much it still has to learn -- at which it can be deployed. In militarized machine learning, that means political, military and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.


humanity had a good run.
---
Kid_Buu
04/12/18 6:42:06 PM
#2:


uh oh
---
anime girls either need a dick or thicc for me to want to have relations with them
im kaname btw
hockeybub89
04/12/18 6:42:26 PM
#3:


Mistere Man
04/12/18 6:42:34 PM
#4:


MC_BatCommander
04/12/18 6:43:31 PM
#5:


Project Insight?
---
The Legend is True!
more_cow_bell
04/12/18 6:43:42 PM
#6:


PiOverlord
04/12/18 6:44:38 PM
#7:


A.I. is the next evolution of humans. Homo sapiens will become extinct, but humans will live on.

Any resistance to this is a resistance to our future.
---
Number of legendary 500 post topics: 26, 500th posts: 18; PiO ATTN: 2
RotM wins 1, https://imgur.com/a/JTCCy JUST MONIKA JUST MONIKA JUST MONIKA JUST MONIKA
Shmashed
04/12/18 6:45:12 PM
#8:


thronedfire2
04/12/18 6:45:49 PM
#9:


Yes, teach the AI to indiscriminately murder humans. What could possibly go wrong?
---
I could see you, but I couldn't hear you You were holding your hat in the breeze Turning away from me In this moment you were stolen...
Renault
04/12/18 6:47:11 PM
#10:


the ones that have the complexion of @MacadamianNut3
---
whats cat up to
Webmaster4531
04/12/18 6:48:20 PM
#11:


thronedfire2 posted...
Yes, teach the AI to indiscriminately murder humans. What could possibly go wrong?

It's not indiscriminate.
---
Ad Hominem.
P4wn4g3
04/12/18 6:48:40 PM
#12:


The Grim Reaper has arrived.
---
Hive Mind of Dark Aether, the unofficial Metroid Social Private board.
https://www.gamefaqs.com/boards/851-dark-aether
Gheb
04/12/18 6:50:45 PM
#13:


thronedfire2 posted...
Yes, teach the AI to indiscriminately murder humans. What could possibly go wrong?

To be fair, they are teaching them to discriminately murder humans. These things process their targets and then decide whether or not to kill them. An indiscriminate murder bot would just kill everyone.

I honestly think the former is more frightening.
---
"S*** I have to stop doing that," Gheb said, as he lay back down and died again. - Forgotten Love
Chiefs are going to win the Super Bowl
SageHarpuia
04/12/18 6:51:06 PM
#14:


What could possibly go wrong?
---
"You will pay dearly for your futile resistance!"
untrustful
04/12/18 6:51:27 PM
#15:


This is to prevent other countries from using a signal jammer to stop a drone from following orders, as happened in Syria when Russia jammed several drones. If drones come pre-loaded with their instructions, there's no need for a signal to deliver them, which means you can't jam a drone and hope it stops what it's doing.
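
As a rough illustration of why on-board instructions defeat jamming, here is a minimal Python sketch; every name in it (CommandLink, Mission, control_step) is a hypothetical stand-in, not real drone software. The idea is simply that the vehicle follows remote commands while the link is alive and falls back to its pre-loaded waypoints when the link is jammed, so cutting the signal no longer stops it.

# Minimal sketch of the "can't jam what doesn't need a link" idea:
# follow remote commands while the link is up, fall back to pre-loaded
# onboard instructions when it is jammed or lost. All names hypothetical.

from dataclasses import dataclass
from typing import List, Optional, Tuple

Waypoint = Tuple[float, float]  # (latitude, longitude)

@dataclass
class Mission:
    waypoints: List[Waypoint]   # instructions loaded before takeoff
    index: int = 0

    def next_waypoint(self) -> Optional[Waypoint]:
        if self.index < len(self.waypoints):
            wp = self.waypoints[self.index]
            self.index += 1
            return wp
        return None

class CommandLink:
    """Stand-in for the satellite link; jamming makes is_alive() False."""
    def __init__(self) -> None:
        self.jammed = False

    def is_alive(self) -> bool:
        return not self.jammed

    def receive_command(self) -> Optional[Waypoint]:
        return (34.0, 36.5) if self.is_alive() else None

def control_step(link: CommandLink, mission: Mission) -> Optional[Waypoint]:
    """Prefer the operator's command; fall back to the onboard mission if jammed."""
    cmd = link.receive_command() if link.is_alive() else None
    return cmd if cmd is not None else mission.next_waypoint()

if __name__ == "__main__":
    link = CommandLink()
    mission = Mission(waypoints=[(33.9, 36.2), (33.8, 36.1)])
    print(control_step(link, mission))   # link up -> remote command
    link.jammed = True
    print(control_step(link, mission))   # link jammed -> onboard waypoint

The trade-off is exactly what the article describes: removing the dependence on the link also removes the human decision point that the link carried.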
---
Swagnificent119
04/12/18 6:51:47 PM
#16:


MRW the military decides to make Hunter Killers from The Terminator

https://www.youtube.com/watch?v=TEoTQB7h3NQ

---
uwnim
04/12/18 6:53:58 PM
#17:


This just seems like a bad idea. This is how you get independent kill bots who proceed to drive us to extinction.
---
I want a pet Lavos Spawn.
[Order of the Cetaceans: Phocoena dioptrica]
SpiralDrift
04/12/18 7:24:00 PM
#18:


I for one welcome our future AI overlords.
---
Do unto others what your parents did to you.
MacadamianNut3
04/12/18 7:30:01 PM
#19:


Talk about fear mongering. I mean, it is a legit concern, but this is just combining existing technologies and making it efficient enough to put on a robot.

Just half a year ago people were memeing about that YOLO object detection algorithm and it was all neato and coolio. You'd have to have zero awareness not to realize the logical next step would be to combine that with techniques from biometrics research, and boom, you have fast object detection + identification. I just say YOLO because it's a popular example and my group has already plopped it onto the robots we use, because it's faster and more lightweight than other long-established algorithms.

Army isn't bringing doomsday. The tools were already out there, in very public view. It would be pretty dumb for them not to explore it, since literally any other country would do the same. While demigod Elon Musk is talking about the dangers of AI, he's developing autonomous cars at the same time. And who exactly do you think is gonna use the approaches developed by his group if they mature enough? But then we'll all be surprised to see autonomous tanks that can zoom through urban areas 20 years from now.

Shit is already all out there and it's no surprise it's being combined.
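
Purely as an illustration of the detect-then-identify glue described above, here is a minimal Python sketch. The detector and face embedder (detect_people, embed_face) are hypothetical stand-ins, not any particular library's API; a real system would swap in a YOLO-family detector and a trained biometric embedding model.

# Sketch of "fast object detection + identification": run a person detector,
# crop each detection, embed the crop, and match it against known embeddings.
# detect_people() and embed_face() are hypothetical stand-ins.

import numpy as np

def detect_people(frame: np.ndarray):
    """Stand-in detector: return (x, y, w, h) boxes for people in the frame."""
    return [(10, 10, 64, 64)]  # dummy detection

def embed_face(crop: np.ndarray) -> np.ndarray:
    """Stand-in embedder: map an image crop to a fixed-length feature vector."""
    rng = np.random.default_rng(int(crop.sum()) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(frame: np.ndarray, watchlist: dict, threshold: float = 0.7):
    """Detection + identification glue: who from the watchlist appears in the frame?"""
    hits = []
    for (x, y, w, h) in detect_people(frame):
        crop = frame[y:y + h, x:x + w]
        emb = embed_face(crop)
        name, ref = max(watchlist.items(), key=lambda kv: cosine(emb, kv[1]))
        if cosine(emb, ref) >= threshold:
            hits.append((name, (x, y, w, h)))
    return hits

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)          # stand-in camera frame
    watchlist = {"person_a": np.ones(128) / np.sqrt(128)}    # stand-in known embedding
    print(identify(frame, watchlist))

The glue itself is a few dozen lines; the hard, data-hungry parts are the detector and the embedding model, which is the point of the post.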
---
Roll Tide & Go Irish
Distant_Rainbow
04/12/18 7:30:25 PM
#20:


Now all they have to do is plug the world's nuclear arsenal into one of these and the prophecy will be fulfilled.
---
Link meets Fire Emblem in CYOA: Tales of Elibe! Come read, and find out what happens! Click below!
https://www.gamefaqs.com/boards/468480-fire-emblem/76125431
Cal12
04/12/18 7:32:16 PM
#21:


MacadamianNut3 posted...
Talk about fear mongering. I mean, it is a legit concern, but this is just combining existing technologies and making it efficient enough to put on a robot.

Just half a year ago people were memeing about that YOLO object detection algorithm and it was all neato and coolio. You'd have to have zero awareness not to realize the logical next step would be to combine that with techniques from biometrics research, and boom, you have fast object detection + identification. I just say YOLO because it's a popular example and my group has already plopped it onto the robots we use, because it's faster and more lightweight than other long-established algorithms.

Army isn't bringing doomsday. The tools were already out there, in very public view. It would be pretty dumb for them not to explore it, since literally any other country would do the same. While demigod Elon Musk is talking about the dangers of AI, he's developing autonomous cars at the same time. And who exactly do you think is gonna use the approaches developed by his group if they mature enough? But then we'll all be surprised to see autonomous tanks that can zoom through urban areas 20 years from now.

Shit is already all out there and it's no surprise it's being combined.


You can think that if you want, but in my mind this is Skynet coming to fruition.
MacadamianNut3
04/12/18 7:38:12 PM
#22:


Cal12 posted...
You can think that if you want but in my mind this is Skynet coming to fruition.

The "I watched Terminator/2001 once and now I base all of my beliefs towards AI/robotics on it" schtick is played out broseph
---
Roll Tide & Go Irish
dreamvoid
04/13/18 7:20:00 PM
#23:


MacadamianNut3 posted...
Cal12 posted...
You can think that if you want but in my mind this is Skynet coming to fruition.

The "I watched Terminator/2001 once and now I base all of my beliefs towards AI/robotics on it" schtick is played out broseph

But seriously though, there are obviously 2 sides to this: applications where it's used to kill people and other fucked up stuff, and applications to help people. And with all of this crap being put out in the open (the data to train the algorithms is the real secret sauce that nobody wants to share), the public can be more informed about the inner workings of these algorithms and do whatever with them, maybe even actively counteract other entities using them. Like how everyone and their grandma knows you can fuck over many sophisticated object detection algorithms by slightly changing a subset of pixels, leaving the image still easily identifiable by a person, because adversarial examples and GANs have been discussed a ton in open forums.

you are clearly an ai trying to downplay this. it appears we have already been infiltrated. i really do think the ethical implications are pretty bad, though. there are already civilian casualties with humans involved.
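
For what it's worth, the "slightly change a subset of pixels" attack mentioned above can be sketched in a few lines. Below is a toy fast-gradient-sign example on an untrained stand-in classifier; the model and "image" are placeholders, not a real detector or dataset. The loss on the attacked class goes up while the per-pixel change stays tiny; against a trained model that kind of nudge typically flips the prediction.

# Toy fast-gradient-sign perturbation: nudge each pixel by +/- epsilon in the
# direction that increases the loss on the model's current prediction.
# The classifier and "image" below are stand-ins for illustration only.

import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
model.eval()
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input image
logits = model(image)
label = logits.argmax(dim=1)                           # attack whatever it currently predicts
loss = loss_fn(logits, label)
loss.backward()

epsilon = 0.03                                         # per-pixel perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("clean loss:      ", loss.item())
    print("perturbed loss:  ", loss_fn(model(adversarial), label).item())
    print("max pixel change:", (adversarial - image).abs().max().item())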
---