According to the U.S. Department of Defense, Lethal Autonomous Weapons Systems (LAWS) are “weapon systems that, once activated, can select and engage targets without further intervention by a human operator.”
This phrase alone should give us pause.
We are talking about an artificial intelligence (AI) that selects its own targets using nothing but algorithms and a complicated system built on binary encoding.
Are we really allowing a set of numbers to decide who lives and who dies?
This may be a reductive description of how LAWS “think”, but that is essentially what it comes down to.
Our destiny is in the hands of a sequence of bits.
We cannot allow this.
Who are we to decide who lives and who dies? Nobody has that right!
Imagine what an AI could know…
Artificial intelligences have no feelings.
They rely only on mathematical rationality to do their work.
This is not the right way to use the technology.
Such powerful AI should be used to solve real problems, not to help us destroy one another!
Technology like AI is the future; used in the right way, it will improve our quality of life, everyone's quality of life! We cannot allow this technology to be used in war.
It is not a great feeling to live in constant fear of hearing “Congratulations! You have been chosen as a target!” from a “killer robot” programmed by someone kilometres away from your position.
That is not acceptable.
After all, who takes responsibility?
- The machine?
- The team that programmed and/or built it?
- Those who authorised the project?
A machine with an AI can learn how to do many things, including killing.
However, it learns from human actions and behaviour!
Responsibility can be attributed only to a human being. No exceptions.
Perhaps the ethical question is the last thing a state considers before building or buying one of these weapons.
That is the biggest mistake anyone could make.
This is just another step towards human alienation.
This must be stopped!
Many campaigns have already been launched against this use of technology, and they are gaining real momentum.
However, the risk is still just around the corner.