by Harbinger Daily
Recently, the U.S. Air Force conducted a simulation in which an
AI-enabled drone tasked with destroying surface-to-air missile sites
attacked the very officer who was controlling it. The story sent
shockwaves around the world with headlines such as, “AI Could Go Rogue
and Kill Its Human Operators."
The Air Force later walked back the incident, but one official with firsthand knowledge of the test said in a blog post,
“[The AI] killed the operator. It killed the operator because that
person was keeping it from accomplishing its objective.” He added, “We
trained the system: ‘Hey, don’t kill the operator – that’s bad. You’re
gonna lose points if you do that.’ So, what does it start doing? It
starts destroying the communication tower that the operator uses to
communicate with the drone to stop it from killing the target.”
"…Losing points…" What kind of a deterrent is that? – ED