The government said it would legislate to address AI risks only if it “determined that existing mitigations were no longer adequate and we had identified interventions that would mitigate risks in a targeted way”; if it was “not sufficiently confident that voluntary measures would be implemented effectively by all relevant parties and if we assessed that risks could not be effectively mitigated using existing legal powers”; and if it was “confident that we could mandate measures in a way that would significantly mitigate risk without unduly dampening innovation and competition”.
The government reiterated its intention to manage risk at the frontier of AI development and to continue to address this risk through international coordination, building on landmark agreements it forged last year, including one between leading AI developers and governments in 10 jurisdictions that provides for government testing of next-generation AI models before and after they are deployed. In this regard, it acknowledged that a context-based approach to AI regulation “may miss significant risks posed by highly capable general-purpose systems and leave the developers of those systems unaccountable” and said it expects that “all jurisdictions will, in time, want to place targeted mandatory interventions on the design, development, and deployment of such systems to ensure risks are addressed”.
Among a suite of broader initiatives it says it is pursuing to address AI-related risks, the government is working closely with the Equality and Human Rights Commission (EHRC) and the Information Commissioner’s Office (ICO) to develop new solutions to address bias and discrimination in AI systems; is considering developing a new code of practice on cybersecurity for AI, based on National Cyber Security Centre (NCSC) guidelines; and will shortly open a call for evidence on AI-related risks to trust in information, to address issues such as ‘deepfakes’.
The government added that it could, in future, also require suppliers of AI products and services to meet minimum good practice standards if they wish to win public contracts.
Technology law expert Sarah Cameron of Pinsent Masons said: “Finding the right balance between regulating for emerging risks and avoiding new rules that have a dampening effect on innovation is difficult. However, while the government has signaled its intention to consult further in a number of areas, it is under pressure to act quickly, with members of a Lords committee just last week stressing that international collaboration on AI regulation must not hold up national policymaking.”
“The government’s non-statutory, context-based approach to AI regulation stands in stark contrast to the broad risk-based approach being pursued under the EU AI Act, which businesses operating in the UK on a cross-border basis will also need to familiarize themselves with.

“Given what the government has said in its response paper, the first form of legislative intervention specific to AI in the UK could come in respect of risks arising with highly capable generative AI systems and be targeted at a small number of providers of such systems,” Cameron said.
“Also unlike in the EU, where a bespoke new AI liability law is proposed as part of broader product liability reforms, the UK government is at this stage only exploring how existing liability frameworks and accountability through the value chain apply in the context of AI, with no immediate prospect of reform in this area,” she said.