United Kingdom: launch of open-source AI assessment platform
Created by the AI Safety Institute, the free tool named “Inspect” will test the capabilities and risks of large language models.
On May 10, 2024, the United Kingdom's AI Safety Institute released "Inspect", a free, open-source AI assessment platform. The British government announced the creation of the AI Safety Institute in November 2023, during the AI Safety Summit, an international conference on AI safety and regulation. The government tasked the Institute with testing AI models, before or after their release, to assess and reduce the risks involved in their deployment.
Inspect offers a software library that lets developers assess the specific capabilities of their large language models (LLMs). The platform is compatible with models from Anthropic, Google, Hugging Face, Microsoft and OpenAI, as well as models hosted on Azure AI, Amazon Bedrock and Cloudflare.
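To give a sense of what such an evaluation library does, here is a minimal, dependency-free Python sketch of the general pattern: a dataset of prompts with expected answers, a model to query, and a scorer that grades each response. This is an illustrative stand-in, not Inspect's actual API; the `toy_model` function and all names below are hypothetical.

```python
# Hypothetical stand-in for a model API call; a real harness would route
# the prompt to a hosted provider (OpenAI, Anthropic, Azure AI, etc.).
def toy_model(prompt: str) -> str:
    canned = {"Capital of France?": "Paris", "2 + 2 = ?": "4"}
    return canned.get(prompt, "I don't know")

def exact_match(output: str, target: str) -> bool:
    # One possible scorer: case-insensitive exact comparison.
    return output.strip().lower() == target.strip().lower()

def evaluate(model, dataset, scorer) -> float:
    # Run every sample through the model, score it, and report accuracy.
    correct = sum(scorer(model(s["input"]), s["target"]) for s in dataset)
    return correct / len(dataset)

dataset = [
    {"input": "Capital of France?", "target": "Paris"},
    {"input": "2 + 2 = ?", "target": "4"},
    {"input": "Largest planet?", "target": "Jupiter"},
]

accuracy = evaluate(toy_model, dataset, exact_match)
print(f"accuracy: {accuracy:.2f}")  # 2 of 3 answers match
```

An actual Inspect evaluation follows this shape but adds provider integrations, multimodal inputs, and richer scorers (including model-graded ones) on top.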
With regard to data, Inspect can analyze not only text but also multimodal inputs, for example those containing images. The open-source platform thus keeps pace with the recent growth of multimodal models such as Gemini (Google), MM1 (Apple), GPT-4 (OpenAI) and Claude 3 (Anthropic).
“The open-source software can allow more people to contribute, counteract power centralization, improve transparency and replicability, give end-users greater control over their tools and cut costs for everyone,” argues Ian Hogarth, Chair of the AI Safety Institute.
“One of AI’s structural challenges is the need to coordinate beyond borders and institutions. It may be an uncomfortable truth, but open-source software is currently one of the ways the United States and China are ‘working together’ in AI research,” he added.