
OpenAI removes military and warfare prohibitions from its policies


OpenAI may be paving the way toward exploring its AI's military potential.

First reported by The Intercept on Jan. 12, a new company policy change has completely removed previous language that banned "activity that has high risk of physical harm," including the specific examples of "weapons development" and "military and warfare."

As of Jan. 10, OpenAI's usage guidelines no longer included a prohibition on "military and warfare" uses in existing language that obligates users to prevent harm. The policy now only notes a ban on utilizing OpenAI technology, like its Large Language Models (LLMs), to "develop or use weapons."


Subsequent reporting on the policy edit pointed to the immediate possibility of lucrative partnerships between OpenAI and defense departments seeking to utilize generative AI in administrative or intelligence operations.

In Nov. 2023, the U.S. Department of Defense issued a statement on its mission to promote "the responsible military use of artificial intelligence and autonomous systems," citing the country's endorsement of the international Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy — an American-led set of "best practices," announced in Feb. 2023, developed to monitor and guide the development of AI military capabilities.


"Military AI capabilities includes not only weapons but also decision support systems that help defense leaders at all levels make better and more timely decisions, from the battlefield to the boardroom, and systems relating to everything from finance, payroll, and accounting, to the recruiting, retention, and promotion of personnel, to collection and fusion of intelligence, surveillance, and reconnaissance data," the statement explains.

AI has already been utilized by the American military in the Russian-Ukrainian war and in the development of AI-powered autonomous military vehicles. Elsewhere, AI has been incorporated into military intelligence and targeting systems, including an AI system known as "The Gospel," being used by Israeli forces to pinpoint targets and reportedly "reduce human casualties" in its attacks on Gaza.

AI watchdogs and activists have consistently expressed concern over the increasing incorporation of AI technologies in both cyber conflict and combat, fearing an escalation of arms conflict in addition to long-noted AI system biases.

In a statement to the Intercept, OpenAI spokesperson Niko Felix explained the change was intended to streamline the company's guidelines: "We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs. A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples."



An OpenAI spokesperson further clarified the change in an email to Mashable: "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions."

OpenAI now introduces its usage policies with a simpler refrain: "We aim for our tools to be used safely and responsibly, while maximizing your control over how you use them."

UPDATE: Jan. 16, 2024, 12:28 p.m. EST This article has been updated to include an additional statement from OpenAI.
