On March 5, 2025, Google submitted a report to Australia's eSafety Commission detailing terrorist and child sexual abuse deepfakes generated by its AI model, Gemini. Between April 2023 and February 2024, the tech giant received 258 complaints about the former and 86 about the latter.
According to Julie Inman Grant, eSafety Commissioner, this report, which represents a “global first,” “highlights how critical it is for companies developing AI products to integrate and test the effectiveness of safeguards aimed at preventing the creation of such content.”
Australian law requires tech companies to regularly provide eSafety with information on the measures implemented to strengthen the protection of their systems. The commission has not yet reviewed the reports submitted by Google.