
Brussels’ rule-setting for AI isn’t pretty, but someone’s got to do it

AI is a vast technology and its potential to change societies is unknown. Someone needs to be thinking and legislating about how its power can be channelled for good.

Alan Beattie

It’s a much-stereotyped instinct in Brussels: see a dynamic industry, rush to regulate it. Like the Tyrannosaurus rex in Jurassic Park, EU officials hunt by movement. If part of the economy is growing fast, they’re rapidly on its tail.

Their latest prey is artificial intelligence, on which the EU member states and parliament agreed an outline AI Act last week. The regime seems likely to be classic EU stuff: generally well-motivated in principle, but highly complex in practice.

The European Parliament in Brussels.

When France’s president is criticising you for excessive regulatory zeal, as did Emmanuel Macron this week, you might want to pause for reflection.

The episode underlines, though, that the EU still often sets the regulatory pace even in sectors where its domestic industry is undersized, simply because it’s prepared to think and act systematically where other large jurisdictions are not.

The General Data Protection Regulation, the EU’s data privacy law passed in 2016, was similarly criticised for creating administrative burdens that advantaged incumbent tech giants with big compliance departments over dynamic start-ups.


Some of those critiques were no doubt reasonable, and GDPR certainly hasn’t facilitated a world-beating EU tech sector. But it’s still the closest we have to an international data protection standard, providing inspiration (if not quite a copy-paste template) for regulation across the world.

It transpires that the “Brussels effect”, where EU rules set global standards, doesn’t necessarily require Europe to have large competitive companies in the relevant market.

To be fair to the EU, judging the correct balance of risk for rules on AI is massively uncertain compared with the traditional sectors where the Brussels effect holds, such as chemicals and cars. Even AI’s creators have vastly different opinions: a large group of researchers and industry figures in April called for a moratorium on its development while the hazards were assessed.

Even if the EU isn’t necessarily the optimal organisation to regulate AI in theory, it is probably the best available so far in practice. (“Faute de mieux”, as they say in the Élysée Palace.) China, although its companies are far ahead of the EU’s in developing AI, is far too much of a surveillance state for its rules on technologies such as facial recognition to be taken as exemplars.

In the US, as with data protection, there is some legislation on a state level, but the Biden administration has so far limited itself to issuing a vague “AI Bill of Rights” and an executive order that is more about guidance and reporting requirements than tough binding law.

The administration’s attitude to tech governance is unclear and hence the US regulatory environment is unstable. The White House has just dismayed the industry by ditching the longstanding US policy of using trade deals to liberalise cross-border data flow.


Tech companies complain about EU bureaucracy – Meta’s social media app Threads is launching in the EU this week five months after its US inception, thanks to data-sharing issues created by the EU’s Digital Markets Act – but it’s a devil they know.

Smaller economies such as the UK also have ambitions to set standards. Rishi Sunak’s government, always desperate to show the benefits of escaping the EU’s regulatory stockade with Brexit, has been touting its own looser approach to AI laws rather than what it calls the “clunky” Brussels version.

But how much the UK can diverge in practice, given how closely its tech ecosystem is intertwined with the EU, is not clear.

In practice, it’s possible that the EU’s AI legislation won’t provide a global model in quite the way GDPR does. Because multinationals want to transfer personal data across borders, there’s a strong incentive for the EU’s trading partners to implement interoperability, if not full harmonisation with GDPR.

The UK’s room to embark on a libertarian adventure in data regulation, for example, is circumscribed by its need to maintain the EU’s “adequacy decision”, deeming its data protection sufficient for personal information to be transferred between the two. In the case of AI, companies can run different algorithms in different jurisdictions with fewer incentives to standardise.

At the very least, though, EU regulation will provide an anchoring or triangulation point to which other governments can refer when creating their own rules.


AI is a vast new technology and its potential to change economies and societies is not known. Someone needs to be thinking and legislating methodically about how its power can be channelled for good.

The EU has had the first stab at doing so of any major jurisdiction. If the US or anyone else wants to have a go, they are welcome to try.
