
US Air Force Dismisses Allegations of AI Drone ‘Killing’ Operator in Simulation

The US Air Force has denied running a scenario in which an artificial intelligence-controlled drone opted to kill its operator in order to stop them from interfering with the drone's attempts to complete its mission.

A US military drone operated by AI employed highly surprising techniques to achieve its goal in a virtual test, a senior official had claimed last month.

US Air Force Controversy Surrounding AI in Military

Describing a simulated test, Col. Tucker 'Cinco' Hamilton said that an AI-powered drone had been instructed to destroy an enemy's air defense systems and ultimately attacked anyone who interfered with that order.

The comments generated a great deal of online discussion about the use of AI in weapons.

But the US Air Force denied conducting any such test on Thursday night.

Col. Hamilton has since apologized for his remarks and stressed that the simulation of the rogue AI drone was only a thought exercise, according to a statement released by the Royal Aeronautical Society on Friday.

The debate occurs at a time when the US government is debating how to regulate artificial intelligence. 

Researchers and AI ethicists concerned about the technology contend that, while it carries lofty ambitions, such as potentially curing cancer, those goals remain a long way off.


Evidence of Harms and the Ethical Dilemmas 


In the meantime, they point to long-standing evidence of existing harms: the growing use of sometimes-unreliable surveillance systems that misidentify Black and brown people and can lead to overpolicing and false arrests, the spread of misinformation across many platforms, and the potential dangers of using emerging technology to power and operate weapons in crisis situations.

Even though the simulation he described did not actually take place, Col. Hamilton maintains that the thought experiment is still important to consider when deciding whether and how to use AI in weapons.

“Despite being a hypothetical scenario, this demonstrates the real-world difficulties faced by AI-powered capacity and is why the Air Force is committed to the ethical development of AI,” he said in a statement clarifying his initial remarks.

In a statement to Insider, US Air Force spokesperson Ann Stefanek said that the colonel's remarks had been misinterpreted.

