In the UK's plans for AI, Brussels still looms large
The new British government plans to regulate powerful AI models. But it should also influence how European authorities implement their law on AI and help shape global norms on AI regulation.
The UK and the EU both suffer from sluggish economic growth. With shrinking workforces, opposition to more migration and energy prices higher than those in the US and China, the UK and the EU will both need to rely on productivity growth to boost their economies. That will require much greater investment in the deployment of technology. Artificial intelligence (AI) is one technology with the potential to deliver that boost.
The technology also comes with risks – such as the potential to produce misinformation and discriminatory outcomes. The EU took a more cautious approach to these risks than the last UK government: EU law-makers enacted an overarching AI Act to help manage them, while the Conservatives chose not to legislate. The new Labour government has announced, however, that it will follow the EU and start the process of designing a British law for AI.
Some businesses claim the EU’s AI Act might dampen investment in the Union. If regulation in the UK had a similar effect, this could prove costly: while the EU has just a handful of large AI firms, the UK is one of the largest AI markets in the world after the US and China, with about 3,000 firms active in developing AI, generating £10 billion a year in revenues. Labour will probably pay careful attention to the impact of a new law on these companies. But the EU’s approach, and that of other countries, will nevertheless affect many British firms. That means Labour must try to influence how EU authorities implement their AI Act and help shape global AI regulation.
Which systems and services should the UK regulate?
In designing a UK law for AI, the first question for Labour is whether to regulate all companies that fall under the EU’s regulation. The EU law covers two types of firms. First, it imposes rules on firms that develop or use AI systems as part of their business. Second, it regulates firms that develop general purpose AI models like OpenAI’s GPT. In its manifesto and the recent King’s Speech, Labour only sought to regulate the second group of firms, promising rules for the “handful of companies developing the most powerful AI models”.
If the UK adopts its own law, it should follow the EU approach of defining which general purpose AI models are so advanced that they pose special risks. The objectives behind regulating the most powerful models are sound and have broad support, reflecting principles agreed in international discussions like the G7 Hiroshima AI Process, the UK AI Safety Summit, the EU-US Trade and Technology Council (TTC) and President Biden’s Executive Order on AI. For example, many countries agree that firms providing powerful models should be transparent about how the models work and take steps to ensure the models are safe. The EU’s AI Act simply makes these obligations more concrete. Voluntary standards have proven insufficient: several providers of large AI models failed to comply with the previous UK government’s attempt at voluntary self-regulation.
Furthermore, the UK will need to ensure that rules covering general purpose AI models are compatible with the EU approach. The EU’s rules on general purpose AI models may achieve the ‘Brussels effect’ – meaning that providers of the most advanced models will comply with the law globally rather than creating distinct models or ways of doing business in the EU. Developers of large models generally want those models to be used widely around the globe, to maximise their take-up and because many models improve with more user feedback. That raises the question of whether UK rules are necessary at all. But it also means that even if EU rules for the most powerful AI models dampen innovation, the UK would do little additional harm by following suit.
Alignment with the EU, to ensure the same AI models are regulated, will not be straightforward, however. The EU AI Act has a complex set of rules to determine which powerful AI models should be subject to the strictest regulatory provisions, and the European Commission has broad discretion to decide which models to regulate. This creates regulatory uncertainty, which the UK should try to avoid. The thresholds for the most stringent requirements in the EU could also capture a large number of existing models. The UK should avoid this uncertain approach and adopt higher but clearer thresholds – sticking to its stated intention of capturing only a handful of today’s models. Otherwise, the UK may inadvertently end up regulating more models than Brussels does.
The UK does not need new regulation on how AI is used – it should rely instead on existing laws to protect against risks.
Regulating general purpose AI models is one thing. The UK is probably right, however, not to follow the EU in regulating all firms that design or deploy AI systems. There is little international consensus about which uses of AI pose the most risk and how to regulate them. While the EU’s rules for general purpose AI will be globally influential, rules on deploying AI will have less global clout, because uses of AI can be local or firms can adapt how they use AI in different countries. Apple, for example, has delayed the rollout of certain AI features in the EU. Abstaining from regulating firms that deploy AI – and relying on existing laws (such as those covering cybersecurity, employment and equality) to protect against misuse of AI – could help the UK remain more attractive than the EU as a place to experiment with and roll out new uses of the technology.
What rules should apply?
A second question for the UK government is what rules to impose on developers and providers of the most advanced general purpose AI models. Again, the general types of obligations imposed in the EU – requiring safety testing, reporting and ensuring the cybersecurity of models – are broadly sensible. These are the same types of obligations the UK government says it wants to impose.
The UK government also sensibly wants firms developing the most powerful models to share their safety testing results with the UK’s AI Safety Institute (AISI) and ensure these tests have “independent oversight”. This approach would change AISI’s current role, under which it tests models directly itself, and instead follow the model of the EU’s AI Act: it puts the burden on developers of AI models which pose ‘systemic risks’ to assess and mitigate those risks, with the authorities supervising the process. This approach is the right one for several reasons.
First, nobody yet knows how to test AI systems effectively: many risks of AI are unknown and AI systems often fail in unpredictable ways. But the developers of AI systems are likely to understand and identify risks better than a public regulator.
Second, it reduces the need for public authorities like the AISI to attract employees with AI skills – of which both the UK and the EU have a massive shortage – and will allow UK and EU authorities to share expertise and findings, reducing both of their workloads. It would also help the UK and EU co-operate with the US AI Safety Institute, whose mandate is likewise focused on supervision rather than on testing models itself.
Third, the EU approach – if implemented well – could be relatively proportionate. For example, developers of AI models should not need to hand over as much information about their models to public authorities as they would if those authorities were testing the models directly. The EU approach also requires providers of certain models to ‘mitigate’, rather than eliminate, risks and to ensure an ‘adequate’ level of cybersecurity. These obligations do not directly impose detailed or specific requirements – so they could help strike a sensible balance between safety and innovation.
How Britain can influence the EU’s regulatory approach
The design of any innovation-friendly UK law on AI will therefore depend heavily on the EU AI Act, which many British AI firms will have to follow anyway. However, the EU now faces a Herculean task in working out how to implement its complex new law. This gives the new UK government an opportunity to influence how the AI Act will work in practice.
One way British interests can be reflected is in the development of technical standards. The AI Act is replete with vague obligations, but allows industry to take the lead in translating these requirements into workable technical rules. Once these technical standards are approved by the Commission, firms will be presumed to comply with the AI Act if they follow the approved standard. Standard-setting bodies are open to all firms that want to participate – including non-European ones – and they typically reach decisions by consensus. This means decision-making is slow, but standards are credible and objective. Consequently, EU standards have immense global influence: for example, 81 per cent of standards set by the EU’s standard-setting body CENELEC are identical to global standards. The UK should ensure that British firms have a seat at the table when those standards are being created. One of the previous UK government’s achievements in AI was creating the AI Standards Hub, a coalition to help the UK’s AI community contribute to international AI standard-setting. The current government should leverage that initiative and ensure it contributes to the EU’s work on standard-setting. The UK could then design its law so that a firm which complies with EU standards would be deemed compliant with any UK law on AI.
One problem with standards, however, is that they take a long time to be produced and then amended. Consensus may prove difficult to achieve, given the fast-moving nature of the technology and the risk that technical rules would act as a ‘straitjacket’ on innovation. The EU aims to solve this problem by encouraging the development of ‘codes of practice’. These are meant to be a ‘stopgap’ before full standards are prepared, and are supposed to be in place in 2025 when the rules for general purpose AI models take effect.
The UK government should prioritise ensuring that British stakeholders can help shape the development of the EU’s AI codes of practice.
The AI Act does not provide much clarity about how codes of practice will be drafted, however. The EU’s AI Office initially sought to prioritise speed by having providers of AI models draft the codes, without much input from other stakeholders. However, under pressure from civil society, the Office is now reportedly making the process somewhat more participatory. The UK government should take advantage of this change – and ensure British AI firms, and other UK stakeholders like the UK AISI, help shape the development of codes of practice. These codes will be an important starting point for EU standards, and will therefore have long-term influence over how the EU AI Act is implemented, and how global norms in AI regulation are shaped.
Conclusion
In some areas, regulation may help boost take-up of AI. For example, the UK and EU are both suffering a digital skills crisis. Both could therefore consider following the US, where the Federal Trade Commission has tried to ensure large tech firms cannot lock in skilled employees and thus hinder competition for AI talent. Ensuring employees with AI skills can change jobs could help ensure that British and European firms are able to hire the staff they need to invest in the technology.
But regulation which imposes costs on the use of AI – even if it is justified to help manage the technology’s risks – is riskier. The new UK government will face some difficult choices if it does pursue regulation. But, as in many other areas, Britain will likely find that EU regulations have a powerful and enduring impact on UK firms, notwithstanding Brexit. The new Labour government is rightly focused on building bridges with Brussels. Britain should use that goodwill to ensure the EU AI Act is implemented in a clear and proportionate way.
Zach Meyers is assistant director at the Centre for European Reform.