The UK government is shifting its focus on artificial intelligence from general safety concerns to cybersecurity and national security. The AI Safety Institute, established just over a year ago, has been renamed the AI Security Institute. This change reflects the government’s broader strategy to boost the economy and industry through AI development. The newly rebranded institute will now concentrate on mitigating the risks AI poses to national security and preventing its use in criminal activities.
Coinciding with this shift, the government has announced a new partnership with Anthropic, an AI safety and research company. While specific services haven’t been detailed, the memorandum of understanding outlines plans to explore the use of Anthropic’s AI assistant, Claude, in public services.
Anthropic will also contribute to research in scientific modeling and provide tools for evaluating AI capabilities to identify security risks at the AI Security Institute. Anthropic CEO Dario Amodei expressed enthusiasm about the potential of Claude to enhance public services and improve access to vital information for UK residents.
While Anthropic is the sole partner announced this week, the government has indicated its intention to collaborate with a range of foundation model companies. Earlier this year, new tools unveiled by the government were powered by OpenAI, and officials have reiterated their commitment to working with multiple AI providers. This shift was evident in the government’s “Plan for Change” announced in January.
The government aims to stimulate investment, foster the growth of homegrown tech companies, and integrate AI into various aspects of public service, including AI assistants for civil servants and digital wallets for citizens. The government maintains that the core mission of the institute remains the same. Ian Hogarth, chair of the institute, emphasized that the focus has always been on security, and that the new criminal misuse team and the partnership with the national security community build on the institute’s existing work.