
Shadow AI: Companies Structuring Their Approach


According to Forrester’s 2024 predictions, 60% of employees will use their own AI tools in 2024 for some daily tasks related to their roles, fueling the phenomenon of shadow AI. Employees bypass their organization’s security policies because they believe it is the most effective way to do their jobs.
“In 2024, shadow AI will grow as organizations struggling to manage regulatory, privacy, and security issues won’t be able to cope with the widespread adoption of BYOAI (bring your own AI). Alongside generic AI tools, employees will also use personal AI software, further boosting BYOAI in the coming year,” states the research firm’s website.
For Stéphane Roder, CEO of AI Builders, the advent of highly capable generative AI tools has been a genuine disaster. “ChatGPT has given incredible access to AI and caused shadow AI to explode. Whereas a data scientist was previously needed to work with AI, three clicks suffice with ChatGPT. It has been a real disaster for two reasons. The first is the tool’s power and attractiveness, which provide an incredible service to employees. The second is that the tool was made available for free without clearly stating that everything entered into it belongs to OpenAI, the company behind ChatGPT.”
Generative AI: the second most cited risk by corporate risk managers
The concern expressed by Stéphane Roder is echoed by Gartner, which in August 2023 highlighted that the widespread availability of generative AI had become one of the main concerns for corporate risk managers in the second quarter of 2023. “Generative AI is the second most frequently cited risk in our second-quarter survey, appearing for the first time in the top 10. This reflects both the rapid growth in public awareness and use of generative AI tools and the broad range of potential use cases—and therefore potential risks—these tools generate,” said Ran Xu, Director of Research at Gartner Risk & Audit Practice.
Beyond information leaks, which also occur with traditional shadow IT, generative AI tools present two additional, specific risks: “If a SaaS solution has a vulnerability, the risk is that data will be exposed. With generative AI, the problem lies in the trust placed in the model generating the results. The two main issues are bias and hallucinations. Bias stems from non-representative data. Hallucination is much more complex: it stems from the vectorization of data, which can match pieces of information that should never have been matched,” notes Christophe Menant, Head of Cybersecurity Offerings at Capgemini France.
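To make the vector-matching point concrete, here is a toy sketch of how nearest-neighbor retrieval over embeddings can surface a topically close but factually wrong passage, the kind of spurious match Menant describes. The vectors below are hand-crafted for illustration; a real system would obtain them from an embedding model:

```python
import numpy as np

# Toy embeddings: hand-crafted solely to illustrate the failure mode.
# In a real pipeline these vectors come from an embedding model.
documents = {
    "Q3 revenue grew 12% year over year": np.array([0.90, 0.10, 0.30]),
    "Q3 revenue fell 12% in the prior restatement": np.array([0.88, 0.12, 0.31]),
    "The cafeteria menu changes on Mondays": np.array([0.10, 0.90, 0.20]),
}

# Embedding of the query "How did Q3 revenue evolve?"
query = np.array([0.89, 0.11, 0.30])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank documents by similarity to the query.
for doc in sorted(documents, key=lambda d: cosine(query, documents[d]), reverse=True):
    print(f"{cosine(query, documents[doc]):.4f}  {doc}")

# Both revenue sentences score almost identically: vector proximity encodes
# topical similarity, not truth, so a contradictory passage can be matched
# and fed to the model, which then answers confidently but wrongly.
```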
Measures taken by Samsung, Amazon, and Goldman Sachs, among others
In April 2023, Samsung suffered confidential information leaks via ChatGPT. Three employees from its semiconductor division used the conversational agent for various tasks: optimizing database source code while searching for potential flaws, checking the quality of program code, and summarizing a meeting.
A month later, the Korean company announced it would temporarily ban the use of generative AI services on company-owned smartphones, computers, and tablets. This ban applied to employees in the mobile and home appliance departments, with Samsung stating it was seeking ways to use generative AI services in a “safe environment for employees to improve work efficiency and convenience” (source: AFP).
Other major global groups have also banned or significantly restricted the use of generative AI tools, including banks such as Goldman Sachs, Wells Fargo, Deutsche Bank, JPMorgan Chase, and Bank of America, as well as Verizon, Apple, and the American e-commerce giant Amazon.
Refocusing after initial experimentation
In this context, companies are considering how to react beyond simply banning or restricting these tools. Since usage is already widespread, general management, IT departments, and business units must take the issue seriously. “After allowing initial experimentation, it’s time to refocus. This is the major challenge for all large organizations. First and foremost, a thorough study is necessary to understand who uses what and whether it’s worth investing in a dedicated, private infrastructure,” warns Stéphane Roder.
However, this in-depth study relies primarily on human factors, notes Marion Videau, Chief Scientific Officer at Quarkslab: “Understanding usage requires putting human mechanisms in place, built on an honest dialogue about daily tasks and the tools that might support them. These human mechanisms rest on trust and involve work processes and methods; they relate to the best practices of the profession. It is also essential to make employees aware that the tools they use must match the sensitivity level of the information they handle. If you handle sensitive data, you cannot use the transcription and summary features of your favorite videoconferencing tool.”
If the internal study shows that the deployment of generative AI tools should be industrialized, a “data office” becomes necessary. “A data office is a hub that sets rules for all employees and provides a dedicated infrastructure,” explains Stéphane Roder.
However, this type of solution comes with a cost. It is crucial to know precisely who genuinely needs a license. “It’s important not to go to extremes but to find a balance between total prohibition and equipping everyone. Companies that ban access frustrate their employees and create a negative internal image. Companies that equip 100% of their employees experience waste and unnecessary costs. Those that allocate access to the right people find their choice relevant and the access well-used,” comments Stéphane Roder.
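What such rules might look like in practice can be sketched in a few lines. The sensitivity tiers, tool names, and clearance table below are illustrative assumptions, not the policy of any company quoted here:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """Hypothetical data-sensitivity tiers a data office might define."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Illustrative clearance table: each tool is cleared up to a maximum tier.
TOOL_CLEARANCE = {
    "public_chatbot": Sensitivity.PUBLIC,        # consumer tool, data leaves the company
    "enterprise_gpt": Sensitivity.CONFIDENTIAL,  # private instance with contractual guarantees
}

def may_use(tool: str, data: Sensitivity) -> bool:
    """Deny by default: a tool not in the table is cleared for nothing."""
    clearance = TOOL_CLEARANCE.get(tool)
    return clearance is not None and data <= clearance

# Example checks: meeting notes are INTERNAL, source code CONFIDENTIAL.
assert may_use("enterprise_gpt", Sensitivity.CONFIDENTIAL)
assert not may_use("public_chatbot", Sensitivity.INTERNAL)
assert not may_use("unknown_tool", Sensitivity.PUBLIC)
```

Denying by default keeps unknown tools, the very definition of shadow AI, outside the sanctioned perimeter, while a single clearance table gives the data office one place to grant or revoke access.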
The Axa Group appears to have found that balance through its business unit Axa Group Operations. In July 2023, it deployed a service called “Axa Secure GPT” allowing all of its employees to “generate, summarize, translate, and correct texts, images, and code” securely, according to a press release. The platform stems from a partnership between Axa and Microsoft and relies on Microsoft’s Azure OpenAI technology.
“The use of open tools can lead to serious issues, including data leaks, security breaches, and loss of intellectual property. Axa has once again demonstrated its ability to innovate quickly, leveraging its cloud-based infrastructure. By doing so, Axa becomes one of the first global insurers to develop such a platform at scale while managing potential risks,” said Alexander Vollert, Group Chief Operating Officer and CEO of Axa Group Operations.
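At the code level, relying on Azure OpenAI typically means calling a private model deployment inside the company’s own Azure tenant rather than a public chatbot. Below is a minimal sketch using the official openai Python SDK; the deployment name, environment variables, and prompts are assumptions for illustration, not details of Axa Secure GPT:

```python
import os
from openai import AzureOpenAI  # official SDK, v1.x

# A private deployment keeps prompts inside the company's Azure tenant,
# unlike the public consumer tools that fuel shadow AI.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="corp-gpt4-deployment",  # hypothetical deployment name chosen by the company
    messages=[
        {"role": "system", "content": "You are an internal assistant. Never reveal confidential data."},
        {"role": "user", "content": "Summarize this meeting transcript: ..."},
    ],
)
print(response.choices[0].message.content)
```

Routing all prompts through such a tenant-bound endpoint is what lets an enterprise offer ChatGPT-like convenience while keeping the data within its own control perimeter.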