NCC Group inputs into UK Parliament inquiry into large language models (LLMs)

08 September 2023

In July, the UK House of Lords Communications and Digital Committee launched an inquiry into the regulation of LLMs. The inquiry will consider how the UK can best respond to the opportunities and risks posed by LLMs and other forms of generative AI, before making recommendations to the UK Government on the best way forward.

In its AI White Paper, published in March 2023, the UK Government set out plans to introduce a “pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative” regulatory framework underpinned by five values-focused principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. The Government is also due to host a Global AI Safety Summit later this year, which will explore how these principles can be adopted at an international level.

The evolving use of LLMs, and the limited understanding of these models’ capabilities, present new cyber security and safety risks that must be managed to ensure the safe, ethical and trusted development of LLMs. As part of its inquiry, the Communications and Digital Committee will review whether the Government’s approach sufficiently tackles these risks and what more should be done to improve safeguards while promoting innovation.

Here, Chief Scientist Chris Anley comments on some of the key points from NCC Group’s written submission:

We believe there are a number of risks that governments should consider when it comes to the regulation of LLMs and other forms of generative AI, such as bias, the use of models in cyber-attacks and the targeting of models by malicious cyber actors.

To address these risks, and make the most of the opportunities generative AI presents, we are urging UK policymakers to:

  • Make UK datasets more readily available for use in generative AI models, so that UK languages, religious outlooks, values and cultural references are protected, and the risk of adopting biases seen elsewhere in the world is minimised;
  • Empower end-users to make decisions about the generative AI models they use by improving transparency of where and how AI technologies are being deployed;
  • Promote the continuous assurance of LLMs throughout their lifecycle, mandating third party product validation where the risk profile necessitates;
  • Build flexibility, agility and periodic reviews into the AI regulatory frameworks from the outset, so that the UK can keep pace with technological and societal developments;
  • Strengthen regulators' powers, resources and capabilities so that they can effectively regulate the use of generative AI;
  • Develop the skills the UK needs to design AI frameworks and assure systems' safety, security and privacy; and,
  • Publish a clear route map for the development of technical standards.

What’s next?

The Committee will now host a series of evidence sessions, inviting expert witnesses to provide input in person and answer the Committee's most pressing questions.

We look forward to seeing how UK policymakers use this input to inform their decisions on the future of AI regulation. We will continue our work in this area to drive better security outcomes, in support of our purpose: to create a more secure digital future.

Contact

NCC Group Press Office

All media enquiries relating to NCC Group plc.

press@nccgroup.com

+44 7721577574