Security of large language models (LLMs) - UK Parliament invites NCC Group’s Chris Anley as expert witness

21 September 2023

Earlier this week, NCC Group’s Chief Scientist, Chris Anley, gave evidence to the UK House of Lords Communications and Digital Committee about the cyber security risks governments should consider when regulating large language models (LLMs) and other forms of generative AI.

Through its latest inquiry into LLMs, the Committee is examining what needs to happen over the next 1–3 years to ensure that the UK can respond to the opportunities and risks presented by generative AI, improving safeguards while promoting innovation. Earlier this month, NCC Group made a written submission to the inquiry, setting out a number of key risks governments should consider when regulating LLMs.

As an expert in artificial intelligence (AI) and machine learning (ML), Chief Scientist Chris Anley was invited to attend Parliament and appear before the Committee to support its work scrutinising government policy and recommending additional actions. During the recorded discussion, the Committee sought experts’ views on topics including:

  • Whether LLMs present an existential or limited risk
  • Immediate risks, such as the use of generative AI tools in cyber attacks
  • What more government, regulators, and industry should do to address those risks

Chris used the opportunity to highlight that while the verifiable increase in cyber risk as a result of generative AI is ‘small to moderate’, the wider, fast-evolving cyber threat landscape, particularly around ransomware and supply chain risk, means that the ‘increase is noteworthy and worth monitoring.’ He also drew the Committee’s attention to the need to consider the cyber security of generative AI models themselves, noting that, with the right inputs, malicious actors could cause models to leak sensitive data or trick them into returning false outputs.

Responding to questions from the cross-party group of Peers about what the government should do next to address the risks of AI and LLMs, Chris recommended that the government promote ‘third-party validation, transparency of processes, and benchmarking of the performance of safety and security measures’, as well as address the security vulnerabilities inherent in AI models.

He emphasised the importance of taking stock of what already exists and bringing it together into a coherent approach: from the government’s AI White Paper to the CMA’s foundational tests, existing NCSC guidance, and the work of the new Frontier AI Taskforce. He said that if this approach proves insufficient, the UK, together with its international partners, will have to ask ‘whether we need to move from regulating AI as a product to treating it as an offensive capability that needs to be controlled accordingly.’

What’s next?

Once it has concluded its evidence-gathering phase, the Communications and Digital Committee will use the insights gained from experts like Chris to draft and publish a report, including recommendations for the UK Government on the best way forward.

NCC Group looks forward to continuing its work in this area to ensure better security outcomes, in support of our purpose to create a more secure digital future.

You can watch the full parliamentary discussion online here: https://www.parliamentlive.tv/Event/Index/c6cee54f-6516-4210-9959-37f75ec6cb71

Contact

NCC Group Press Office

All media enquiries relating to NCC Group plc.

press@nccgroup.com

+44 7721577574