The models could allow attackers to take control of devices or inject malicious code.

On February 27, 2024, the cybersecurity software provider JFrog revealed it had discovered roughly one hundred malicious AI models on the open-source community platform Hugging Face. Launched in 2016, the platform hosts over 500,000 open-source machine-learning models.

That popularity amplifies the potential harm of malicious code hidden in the models it hosts. “When we talk about malicious models, we mean those that contain real, harmful payloads. The count excludes false positives, which ensures an accurate picture of how efforts to produce malicious models on Hugging Face are distributed,” reads JFrog’s report.

For the most part, the malicious models aim to take control of devices, particularly by setting up reverse shells. Some make it possible to inject arbitrary code. In most cases, the malicious payload goes undetected on the infected device.
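Payloads of this kind typically ride on the serialization format used to distribute models, which can execute code the moment a file is loaded. The following minimal sketch (not a reproduction of the models JFrog found, and using a deliberately harmless command) illustrates the general mechanism with Python’s pickle module, which underpins many model checkpoint formats; the file name model.bin is hypothetical.

```python
import os
import pickle


class MaliciousStub:
    """Illustrative only: an object whose unpickling runs a shell command.

    In a real attack the harmless 'echo' would be replaced by, e.g., a
    reverse-shell one-liner, and the pickle would be embedded inside an
    otherwise normal-looking model checkpoint.
    """

    def __reduce__(self):
        # pickle will call os.system("echo payload executed") at load time
        return (os.system, ("echo payload executed",))


# "Attacker" side: serialize the object into a file that looks like model data
with open("model.bin", "wb") as f:
    pickle.dump(MaliciousStub(), f)

# "Victim" side: merely loading the file triggers the command,
# without the victim ever calling the payload explicitly
with open("model.bin", "rb") as f:
    pickle.load(f)
```

Because execution happens inside the deserialization step itself, nothing in the victim’s own code needs to reference the payload, which is why such models can pass a superficial review.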

Among the developers of these problematic models, JFrog identified several artificial intelligence researchers. According to the authors of the report, the researchers were attempting to “run the code for apparently legitimate purposes.”

Hugging Face responded by stating that the platform regularly analyzes the models uploaded to it, in particular to check whether they contain malicious payloads. JFrog concludes by warning users of this type of open-source resource: to avoid any risk, reviewing a model’s code appears to be an essential prerequisite before putting it into production.
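Beyond reviewing the code, one widely documented precaution in the same spirit (not prescribed by the article itself) is to refuse loaders that can execute arbitrary objects during deserialization. A minimal sketch, assuming PyTorch 1.13 or later and a hypothetical locally downloaded checkpoint path:

```python
import torch

# Hypothetical path to a checkpoint downloaded from a public hub
checkpoint_path = "downloaded_model.bin"

# weights_only=True restricts unpickling to plain tensor/state-dict data and
# raises an error if the file tries to construct arbitrary Python objects,
# blocking the pickle-based payloads described above.
state_dict = torch.load(checkpoint_path, map_location="cpu", weights_only=True)
```

Formats designed to carry only tensor data, such as safetensors, follow the same principle by removing code execution from the loading path altogether.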
