"We must face a world where AI is already here and transforming our society,” Hamilton said in an interview with Defence IQ Press in 2022. Department of Defense’s research agency, DARPA, announced that AI could successfully control an F-16. ![]() Hamilton is part of a team that is currently working on making F-16 planes autonomous. Hamilton and the 96th previously made headlines for developing Autonomous Ground Collision Avoidance Systems ( Auto-GCAS) systems for F-16s, which can help prevent them from crashing into the ground. The 96th tests a lot of different systems, including AI, cybersecurity, and various medical advances. Air Force as well as the Chief of AI Test and Operations. ![]() Hamilton is the Operations Commander of the 96th Test Wing of the U.S. Air Force’s 96th Test Wing and its AI Accelerator division, the Royal didn’t immediately return our request for comment. "It appears the colonel's comments were taken out of context and were meant to be anecdotal." "The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," Air Force spokesperson Ann Stefanek told Insider. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target” You’re gonna lose points if you do that’. He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post. ![]() So what did it do? It killed the operator. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. And then the operator would say yes, kill that threat. 
“We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. As relayed by Tim Robinson and Stephen Bridgewater in a blog post and a podcast for the host organization, the Royal Aeronautical Society, Hamilton said that AI created “highly unexpected strategies to achieve its goal,” including attacking U.S. Before Hamilton admitted he misspoke, the Royal Aeronautical Society said Hamilton was describing a "simulated test" that involved an AI-controlled drone getting "points" for killing simulated targets, not a live test in the physical world.Īfter this story was first published, an Air Force spokesperson told Insider that the Air Force has not conducted such a test, and that the Air Force official’s comments were taken out of context.Īt the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, Hamilton held a presentation that shared the pros and cons of an autonomous weapon system with a human in the loop giving the final "yes/no" order on an attack. ![]() Air Force in order to override a possible "no" order stopping it from completing its mission. Initially, Hamilton said that an AI-enabled drone "killed" its human operator in a simulation conducted by the U.S. "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI" Tucker “Cinco” Hamilton, the USAF's Chief of AI Test and Operations, said in a quote included in the Royal Aeronautical Society’s statement. "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Col.