The wording bans all illegal uses, such as creating malware or producing violent, hateful, or discriminatory content. In particular, it prohibits using OpenAI's services to “promote suicide or self-harm, […] develop or use weapons, […] hurt others or destroy property, […] or monitor communications.”
However, the update drops some items from the previous list of banned uses, including “military and warfare.” The implication is that OpenAI now permits the use of its AI models for military purposes, provided they do not involve weapons or the surveillance of communications. In a statement to TechCrunch, OpenAI confirmed that there are indeed “national security use cases that are in keeping with our purpose.”
“For example, we are already working with DARPA [the Defense Advanced Research Projects Agency] to spur the development of new cybersecurity tools to secure the open source software that critical infrastructure and industry depend on,” the company stated.
“It was not clear whether these beneficial use cases were permitted under the ‘military’ provisions of our previous policy. The aim of this update is therefore to provide clarity and make such discussions possible,” OpenAI added.