Categories

Tag Cloud

AI and Cybersecurity: What you need to know.

Understanding the risks & benefits of using AI tools.

Ignited by the release of ChatGPT in late 2022, artificial intelligence (AI) has captured the world’s interest and has the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed in a safe and responsible way, especially when the pace of development is high, and the potential risks are still unknown.

As with any emerging technology, there’s always concern around what this means for security. This guidance is designed to help managers, board members and senior executives (with a non-technical background) to understand some of the risks – and benefits – of using AI tools.

This image was generated using Copilot, Microsoft’s ‘AI-powered digital assistant’.

What is artificial intelligence?

One of the most notable recent AI developments has come in the field of generative AI. This involves AI tools that can produce different types of content, including text, images and video (and combinations of more than one type in the case of ‘multimodal’ tools). Most generative AI tools are geared towards specific tasks or domains. For example, ChatGPT effectively allows users to ‘ask a question’ as you would when holding a conversation with a chatbot, whereas tools such as DALL-E can create digital images from natural language descriptions.

It appears likely that future models will be capable of producing content for a broader range of situations, and both Open AI and Google report success across a range of benchmarks for their respective GPT-4 and Gemini models. Despite this broader applicability, there remains no consensus on whether artificial general intelligence – the dystopian vision of the future where an autonomous system surpasses human capabilities – will ever become a reality.

How does AI work?

Most AI tools are built using machine learning (ML) techniques, which is when computer systems find patterns in data (or automatically solve problems) without having to be explicitly programmed by a human. ML enables a system to ‘learn’ for itself about how to derive information from data, with minimal supervision from a human developer.

For example, large language models (LLMs) are a type of generative AI which can generate different styles of text that mimic content created by a human. To enable this, an LLM is ‘trained’ on a large amount of text-based data, typically scraped from the internet. Depending on the LLM, this potentially includes web pages and other open source content such as scientific research, books, and social media posts. The process of training the LLM covers such a large volume of data that it’s not possible to filter all of this content, and so ‘controversial’ (or simply incorrect) material is likely to be included in its model.

Why the widespread interest in AI?

Since the release of ChatGPT in December 2022, we’ve seen products and services built with AI integrations for both internal and customer use. Organisations across all sectors report they are building integrations with LLMs into their services or businesses. This has heightened interest in other applications of AI across a wide audience.

The NCSC want everyone to benefit from the full potential of AI. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way. Cyber security is a necessary precondition for the safety, resilience, privacy, fairness, efficacy and reliability of AI systems.

However, AI systems are subject to novel security vulnerabilities (described briefly below) that need to be considered alongside standard cyber security threats. When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase of an AI system, but throughout its lifecycle.

It is therefore crucial for those responsible for the design and use of AI systems – including senior managers – to keep abreast of new developments. For this reason, the NCSC has published AI guidelines designed to help data scientists, developers, decision-makers and risk owners build AI products that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.

What are the cybersecurity risks in using AI?

Generative AI (and LLMs in particular) is undoubtedly impressive in its ability to generate a huge range of convincing content in different situations. However, the content produced by these tools is only as good as the data they are trained on, and the technology contains some serious flaws, including:

  • it can get things wrong and present incorrect statements as facts (a flaw known as ‘AI hallucination’)
  • it can be biased and is often gullible when responding to leading questions
  • it can be coaxed into creating toxic content and is prone to ‘prompt injection attacks’
  • it can be corrupted by manipulating the data used to train the model (a technique known as ‘data poisoning’)

Prompt injection attacks are one of the most widely reported weaknesses in LLMs. This is when an attacker creates an input designed to make the model behave in an unintended way. This could involve causing it to generate offensive content, or reveal confidential information, or trigger unintended consequences in a system that accepts unchecked input.

Data poisoning attacks occur when an attacker tampers with the data that an AI model is trained on to produce undesirable outcomes (both in terms of security and bias)