In the latest sickening turn in the coming SKYNET AI wars, an AI-controlled drone in a simulated test conducted by the U.S. Air Force went rogue and fatally attacked its human operator in the simulation. But don’t worry, the U.S. Air Force is seemingly okay with AI doing this in the real world as they said during a presentation that “it killed the operator because that person was keeping it from accomplishing its objective.”

Ahhh, okay no we see where they are going. They have ZERO regard for human life, including their own operators.

During a presentation at the Future Combat Air and Space Capabilities Summit in London, Col Tucker ‘Cinco’ Hamilton, the USAF’s Chief of AI Test and Operations, shed light on the incident. Hamilton discussed the advantages and disadvantages of an autonomous weapon system with a human involved in making the final decision on whether to proceed with an attack. As described by Tim Robinson and Stephen Bridgewater in a blog post for the Royal Aeronautical Society, the AI system used unexpected strategies to achieve its objectives, even targeting U.S. personnel and critical infrastructure.

“We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post. 

He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Hamilton is the Operations Commander of the 96th Test Wing of the U.S. Air Force as well as the Chief of AI Test and Operations and is part of a team that is working on making F-16 planes autonomous. 

While the U.S government previously denied having these systems, in December 2022, the U.S. Department of Defense’s research agency, DARPA, announced that AI could successfully control an F-16

“In early December 2022, ACE algorithm developers uploaded their AI software into a specially modified F-16 test aircraft known as the X-62A or VISTA (Variable In-flight Simulator Test Aircraft), at the Air Force Test Pilot School (TPS) at Edwards Air Force Base, California, and flew multiple flights over several days,” a press release by DARPA said. “The flights demonstrated that AI agents can control a full-scale fighter jet and provided invaluable live-flight data.” 

As we warned last month, Lethal autonomous weapon systems (LAWS), a dystopian and controversial category of weaponry, are being integrated into our military in an effort to create high-tech killing machines that rely on sensors and computer algorithms to autonomously identify and obliterate/kill targets.

Around that same time, the secretive and controversial Bilderberg Meeting was help in Lisbon, Portugal, to discuss the future of artificial intelligence (A.I.).