UK public sector AI use requires accountability to protect fundamental rights.
In January we responded to the Public Accounts Committee inquiry on Use of AI in Government. Today they've published their final report, so what does it say?
The Committee has called for the right balance to be struck between "industry support and ethical use", given the Government is seeking to scale up AI adoption in the public sector. For us, this is a huge concern given the gap left in the regulatory environment on AI safety, while the Data (Use and Access) Bill makes its way through Parliament and the UK AI Bill remains stalled. We told the Committee that to prevent unintended individual, collective or societal harm, public sector AI use needs clear red lines, regulatory scrutiny and transparency, and crucially, should not be deployed without these guardrails.
We urged the Committee to fully engage with the wide-ranging risks concerning the use of AI in Government. AI and automated decision-making can facilitate human rights violations and exacerbate societal power imbalances, disproportionately affecting Black people and other marginalised communities who already face discrimination.
Did they listen? Sort of.
The PAC focussed their inquiry on how public trust is being jeopardised by the lack of transparency and robust standards for AI adoption in the public sector. They called on the Government to "address public concerns over the sharing of sensitive data in AI’s use" and they called for better compliance with the Algorithmic Transparency Recording Standard (to provide greater transparency on algorithm-assisted decision-making).
While we're supportive of increasing transparency in this way, we are still deeply concerned at the lack of regulation - including frameworks for compliance with existing laws - and the complete lack of infrastructure for AI accountability.
Building trust with the public should extend further than narrow considerations of data protection. To build trust, the Government needs to engage meaningfully in participatory policymaking in which ‘the public’ holds a lever of power to refuse their data being used in AI tools. This would be an approach that actually protects our fundamental rights and ensures consistent auditing and public oversight.
We called on the Committee to recommend that the Government develop a coherent and structured approach to impact assessment, auditing and public oversight to support the use of AI in Government. Unfortunately, the Committee stopped short of putting this recommendation forward and instead called only for better transparency through recording.
The Committee did call for DSIT, in collaboration with the Cabinet Office, to set out its proposed AI sourcing and procurement framework publicly, to ensure it meets specific objectives. We were pleased to see tackling monopoly market power and over-reliance on specific services included, but disappointed not to see recommendations that would protect fundamental rights, which should be central to procurement decisions.
As the Government rolls out the ‘AI Opportunities Action Plan’, we are deeply concerned at the Government's approach, or lack thereof, to accountability and human rights.