
OpenAI opens door to military use of its tech

ChatGPT publisher adopts new terms of use.

On January 10, 2024, ChatGPT’s publisher, OpenAI, announced it had “updated [its] terms of use for more clarity and to provide more service-specific guidance.” In particular, the company amended the list of use cases its AI models prohibit. From now on, OpenAI authorizes all uses by default, “as long as they respect the law and do not harm the user or others.”

The wording still excludes all illegal uses, such as creating malware or violent, hateful and discriminatory content. In particular, it prohibits using OpenAI’s tools to “promote suicide or self-harm, […] develop or use weapons, […] hurt others or destroy property, […] or monitor communications.”

However, the update drops some use cases from the previous list of banned uses, including “military and warfare.” The implication is that OpenAI now authorizes the use of its AI models for military purposes, as long as they do not involve weapons or espionage. In a statement to TechCrunch, OpenAI explained that there are indeed “national security use cases that are in keeping with our purpose.”

“For example, we are already working with DARPA [Defense Advanced Research Projects Agency, editor’s note] to boost the development of new cybersecurity tools, in order to secure open source software on which critical infrastructure and industries depend,” stated the company.

“It wasn’t clear whether these beneficial use cases were authorized under the ‘military’ sections of our previous policy. The goal of this update is thus to provide clarity and make these discussions possible,” added OpenAI.
