Drones will soon decide who to kill
Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.
Once developed, these drones will represent the ultimate militarisation of AI and carry vast legal and ethical implications for wider society.
There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process, Peter Lee writes for The Conversation.
At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.
Existing lethal military drones like the MQ-9 Reaper are carefully controlled and piloted via satellite. If a pilot drops a bomb or fires a missile, a human sensor operator actively guides it onto the chosen target using a laser.
Ultimately, the crew has the final ethical, legal and operational responsibility for killing designated human targets.
As one Reaper operator states: "I am very much of the mindset that I would allow an insurgent, however important a target, to get away rather than take a risky shot that might kill civilians."
Even with these remote drone killings, human emotions, judgements and ethics have always remained at the centre of war. The prevalence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.
This points to one possible military and ethical argument, made by Ronald Arkin, in support of autonomous killing drones: if the drones themselves drop the bombs, psychological problems among crew members might be avoided.
The weakness in this argument is that you don't have to be responsible for killing to be traumatised by it.
Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes.
Research shows that it is possible to suffer psychological harm by frequently viewing images of extreme violence.