Goal is to identify flaws specific to emerging tech.

On October 26, 2023, Google announced it was extending its bug bounty program to generative AI apps. The web giant recently formed a team dedicated to AI cyber protection, named the “AI Red Team”. It helps assess which AI model vulnerability reporting deserves compensation.

Very typically, Google’s bug bounty covers vulnerabilities that can be exploited to manipulate or steal generative AI models. However, it is targeted toward flaws that are characteristic of this tech, such as prompt injection attacks. The latter involve requesting large language models (LLM) to comply with contradictory prompts, leading it to ignore safeguards designed by developers.

With its bug bounty program, Google will also compensate the discovery of vulnerabilities that allow “training data extraction.” These vulnerabilities make it possible to reconstruct a model’s training data, in particular non-public personal information and passwords. However, the tech giant will not offer compensation for bugs tied to copyright issues or non-sensitive or public data extraction.

Stay tuned in real time
Subscribe to
the newsletter
By providing your email address you agree to receive the Incyber newsletter and you have read our privacy policy. You can unsubscribe at any time by clicking on the unsubscribe link in all our emails.
Stay tuned in real time
Subscribe to
the newsletter
By providing your email address you agree to receive the Incyber newsletter and you have read our privacy policy. You can unsubscribe at any time by clicking on the unsubscribe link in all our emails.