A service provider for the generative AI specialist accidentally sent a file containing personal customer information to a third party.

On January 22, 2024, Anthropic, the US generative AI specialist, confirmed it had suffered a data leak. At the end of 2023, one of its service providers accidentally sent a third party a file containing a “subset” of Anthropic customer information and “open credit balances.” According to the developer of the LLM Claude, the leak did not include sensitive data such as banking or payment details.

Anthropic did not specify how many customers were affected and stated it had “no knowledge of malicious behavior resulting from the leak.” However, the company urged its customers to be vigilant against scams, mentioning in particular potential “payment requests, payment instruction changes, emails with suspicious links, and identification information requests.”

“Our investigation shows that this was an isolated incident, caused by human error, and not by a leak in Anthropic’s systems,” the startup assured. The announcement, however, came amid tensions for AI giants. At the end of January 2024, the Federal Trade Commission (FTC), the US competition authority, launched an investigation into five industry players:

  • two generative AI startups, LLM market leaders: Anthropic and OpenAI;
  • three Big Tech companies: Alphabet (Google parent company), Amazon and Microsoft.

The FTC fears strategic partnerships established between the former and the latter may cause market distortions.
