Why having an AI Policy is important for your organization
By now, everyone has heard about Generative AI (GenAI) tools such as the popular ChatGPT or Microsoft’s own Copilot. While we’ve all heard stories about students leaning on them a bit too heavily during final exams, far fewer information workers are aware of the potential dangers of using them with corporate data.
While creating an AI policy template that works for every organization is nearly impossible, in this post I will share some of the most important elements to include and, most importantly, the potential dangers of leaving the use of AI in your organization ungoverned. Let’s get started!
Why do you need an AI policy?
Your first question might be: why do we even need to bother with an AI policy? Don’t we already have enough policies that our users need to follow? Why add another one? Following the popularity of AI tools in the consumer space, most information workers now want to use GenAI tools for work. I mean, who can blame them? But did you know that the free versions of these services can use your data to train their models?
Take a look at ChatGPT’s terms below. The privacy terms clearly state that for services for individuals (such as the free version or a Plus subscription), the content provided to ChatGPT can be used to train the model.
We can find a similar clause in the Copilot for consumers’ terms and conditions.
We often hear the phrase, “If you’re not paying for the product, you are the product,” and that applies to free Generative AI solutions. So, what’s the worst that can happen (in a very simple and exaggerated example)?
- Alex from the R&D department is working on a new patent for solving Problem A. Before sending it to their manager, they ask ChatGPT to spell-check it. Alex just opened a free ChatGPT account after hearing how everyone uses it for spell-checking.
- ChatGPT does an amazing job of spell-checking, and at the same time, it learns how to solve Problem A—something the model did not know before.
- Vlad from another company is also wondering how to solve Problem A and decides to ask ChatGPT if it has any ideas. Based on the data from Alex, ChatGPT now knows how to solve it and gives Vlad the solution. Vlad realizes that no one has talked about this before and publishes a blog post and video about it. This can stop Alex’s company from patenting the invention, as it is now public information.
Now, granted, this is a worst-case example, but it gives you an idea of how sharing any intellectual property with a Generative AI tool without the proper privacy requirements can harm your organization.
How to use GenAI safely?
Of course, there are ways to use Generative AI safely for your work data, but they often come at a cost. For example, ChatGPT offers its Team and Enterprise plans, which exclude your data from training the model by default.
Microsoft also offers Enterprise Data Protection for Copilot for Microsoft 365 and Commercial Data Protection for Microsoft Copilot to users on specific licenses. Recently, Microsoft also announced updates to Microsoft Copilot that extend Enterprise Data Protection to all Entra ID accounts, which provides multiple benefits, including not training the models on your organization’s data.
What to keep in mind when creating an AI policy
The first thing you need to do is train your users. As IT professionals, we understand privacy, compliance, and all those terms and their implications, but most users do not. Make sure you teach them the differences and which approved AI solutions they can use.
You should, of course, also create policies around it, which brings us to the topic of this post: explaining to users in clear terms what they are and are not allowed to do. For example, your generative AI policy could include rules such as:
- Require users to log in to generative AI tools only with their organizational email or credentials,
- Require users to opt out of data sharing with the tool’s vendor,
- Forbid users from inputting the following categories of data into GenAI tools unless explicitly allowed to (see the sketch after this list):
- Source code
- Personally Identifiable Information
- Passwords, application programming interface (API) keys, and other secrets
- Any confidential information
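To make the list above concrete, here is a minimal, hypothetical sketch of the kind of pre-submission check an internal gateway or browser extension could run before a prompt ever leaves your network. The patterns and category names are illustrative assumptions on my part, not a complete data loss prevention (DLP) solution:

```python
import re

# Hypothetical example patterns; a real deployment would rely on a proper DLP
# engine and your organization's own definitions of confidential data.
BLOCKED_PATTERNS = {
    "AWS access key (secret)": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header (secret)": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Email address (possible PII)": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN (possible PII)": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of any blocked data categories detected in a prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Please spell-check this draft. My key is AKIAABCDEFGHIJKLMNOP."
    findings = scan_prompt(prompt)
    if findings:
        print("Prompt blocked, detected:", ", ".join(findings))
    else:
        print("Prompt allowed")
```

In practice, most organizations would pair such a check with a commercial DLP product rather than maintain regular expressions by hand, but even a simple guardrail like this reinforces the written policy.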
Make sure you customize your AI policy to your organization. For example, it might be safe for users to share source code when using GitHub Enterprise, but they should not share it with any other tool.
You can also take advantage of certain security settings in case users do not follow your policy. For example, Microsoft offers documentation on how you can Require commercial data protection in Copilot, which prevents users from signing in to Microsoft Copilot with a personal Microsoft account (MSA). With the upcoming change that sends Copilot for Entra ID users to a different URL, you could potentially block the consumer URL through your organizational proxy.
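As a rough illustration of that proxy idea, here is a minimal sketch of a mitmproxy addon that rejects requests to consumer GenAI endpoints. The hostnames are example assumptions; you would replace them with the actual consumer URLs your organization decides to block:

```python
# Hypothetical mitmproxy addon: block consumer GenAI endpoints at the egress proxy.
# Run with: mitmproxy -s block_consumer_ai.py
from mitmproxy import http

# Example hostnames (assumptions); replace with the consumer URLs you want to block.
BLOCKED_HOSTS = {
    "chatgpt.com",
    "chat.openai.com",
    "copilot.microsoft.com",
}

def request(flow: http.HTTPFlow) -> None:
    # Short-circuit the request before it leaves the network and point users
    # at the approved enterprise tool instead.
    if flow.request.pretty_host in BLOCKED_HOSTS:
        flow.response = http.Response.make(
            403,
            b"This consumer AI tool is blocked by the corporate AI policy. "
            b"Please use the approved enterprise alternative.",
            {"Content-Type": "text/plain"},
        )
```

The same effect can usually be achieved with whatever proxy or secure web gateway your organization already runs; the point is simply to back the written policy with a technical guardrail.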
AI policy and training are essential
Generative AI tools can be powerful allies in getting work done faster and better. However, they can also bring certain risks when used inappropriately. It’s important that your organization has an AI policy and AI training so that users know which tools they are allowed to use for which data and understand the impact of providing intellectual property to unapproved tools. And when policy alone is not enough, IT administrators can enforce it through internet proxy rules or group policies that steer users toward the right tools.