The Week

22 September 2023

Hashmath Hassan
Researcher

The headlines this week have been dominated by debates triggered by the Prime Minister’s decision to delay some key net zero targets. Here at Reform, though, we’re looking at some of the policy developments that may have flown under your radar, and exploring the tension between autonomy and security in the world of digital regulation.

The Home Office has urged Meta to rethink its plans to roll out end-to-end encryption on Instagram and Facebook Messenger, warning that encryption may inadvertently shield child predators online, and is insisting that the tech giant install safety measures such as technology to scan encrypted messages. Tech companies, on the other hand, consider such scanning to be fundamentally incompatible with their commitments to user privacy and security.

On Monday, the Competition and Markets Authority (CMA) published its much-anticipated review of AI foundation models — the type of AI, trained on vast amounts of data, that underpins tools such as ChatGPT. The 129-page document is a balanced report that identifies the immense potential of foundation models and concludes that regulation is not required (at this time), as it might stifle innovation.

At the same time, the review warns of a real risk that the market will be dominated by a few players. Only big tech companies will have access to large volumes of expensive proprietary data, such as academic journals. With weak competition, consumers could be “exposed to significant levels of false information, AI-enabled fraud, or fake reviews” and end up paying higher prices for shoddy products and services.

The CMA presents a list of very sensible principles to guide upcoming developments (such as sustaining a diverse set of business models, including both open- and closed-source foundation models, to keep barriers to entry low). However, despite calls from some experts, the CMA has stopped short of recommending further regulation: the AI space is changing so rapidly that it is incredibly difficult to regulate. The watchdog aims to publish an update in early 2024 — one to watch out for.

The government also announced a new advisory service which will provide businesses with tailored advice on how to meet regulatory requirements for digital technology and AI. This pilot scheme will launch next year, backed by £2 million of government funding, and will deliver on commitments made in the government's AI Regulation white paper in April. The scheme will bring together regulators from a range of sectors to provide a more collaborative regulatory approach. This is sensible, as it would allow regulation to keep pace with rapidly evolving technology.

Still missing, though, is any sign of a single, unified AI regulator. Given the risks posed by these technologies, advocates of a single regulatory body say it could increase the UK’s resilience by providing a holistic approach (similar to that of the EU) and avoiding gaps that may arise between regulators.

On to the read of the week…

We’ve been looking at the results of the British Social Attitudes (BSA) survey, published yesterday. This year marks 40 years of the BSA, which gives us a useful picture of how the public’s views on government and policy have changed over time. The report notes that alongside ideological preferences, voter attitudes have also been shaped by crises — sometimes in fascinating ways. For instance, “after the financial crash, 41% said the government should definitely reduce income inequalities and now, after COVID-19 and the cost-of-living crisis, 53% feel that way”. Meanwhile, satisfaction with the NHS has fallen again, to a new record low of just 29%, down from 70% in 2010.