Security Awareness: Governing the Acceptable Usage of AI Technologies


AI Usage and Governance

With the ongoing introduction of new Artificial Intelligence (AI) tools, organizations must carefully assess both the extensive opportunities and the potential risks that come with employee use.

Generative AI tools, like ChatGPT, focus on creating content such as text, code, images, or videos. These tools analyze large data sets, learn the patterns and features of the data, and generate new content. When using ChatGPT in a business setting, one major concern is the potential sharing of sensitive or confidential information.

Although it may be convenient to input data into a tool and receive a personalized response within seconds, if that input includes customer information or organizational data, you may be exposing it to a third party, as the information can become part of the tool's larger data set. This exposure can lead to compliance violations, breaches of customer privacy and confidentiality, and an increased risk of data breaches and unauthorized access.
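One practical safeguard is to screen prompts before they ever leave the organization. The sketch below is a minimal, illustrative example of that idea, assuming a simple regex-based filter; the patterns and placeholder labels are hypothetical, and a production deployment would rely on a vetted data-loss-prevention tool rather than hand-rolled rules.

```python
import re

# Hypothetical patterns for common sensitive-data formats.
# A real deployment would use a maintained DLP library, not ad hoc regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive tokens with placeholders before the prompt
    is sent to a third-party AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt
```

A prompt such as `"Email jane.doe@example.com for details"` would be rewritten to `"Email [REDACTED EMAIL] for details"` before transmission, keeping the customer identifier inside the organization.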

It is important for organizations to proactively review how tools can be used internally and ensure new technologies are being used in a way that aligns with organizational goals and is compliant with privacy and consumer protection regulations.

A recent report from ISC2 found that only 27% of cybersecurity professionals said their organizations have a formal policy in place to govern the safe and ethical use of AI, and just 15% of organizations have a formal policy on securing and deploying AI technologies.

A defined policy can help set clear guidelines and rules for how generative AI tools may be used and highlight any legal or compliance standards that must be considered. Your policy should establish a standard approach for governing employee use of generative AI and include:

  • Acceptable use
  • Acceptable technologies/approved applications and third-party relationships
  • How to safeguard intellectual property and/or sensitive data
  • Privacy considerations
  • Consequences for policy violations

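As an illustration of the "acceptable technologies/approved applications" element above, an organization's internal tooling might check requested tools against a maintained allowlist. This is a hypothetical sketch; the tool names are placeholders, and in practice the list would live in a managed configuration store rather than in code.

```python
# Hypothetical allowlist of AI applications approved under the policy.
APPROVED_AI_TOOLS = {"chatgpt-enterprise", "internal-copilot"}

def is_approved(tool_name: str) -> bool:
    """Check a requested tool against the organization's approved list,
    ignoring case and surrounding whitespace."""
    return tool_name.strip().lower() in APPROVED_AI_TOOLS
```

A request for an unvetted tool would fail the check and could be routed to a review process instead of being silently allowed.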
It is also critical to ensure employees are aware of and understand the evolving risks associated with AI use so they do not inadvertently expose sensitive organizational information while exploring the potential benefits of available tools. Ongoing training should also provide guidance on avoiding associated risks, including plagiarism and copyright infringement, fraud, and potential code vulnerabilities. With any AI tool, human oversight is always needed to confirm the accuracy of information and to review generated content for potential biases.

Many organizations and employees are already using AI in some fashion or will likely adopt the technology soon. Creating and adhering to an AI policy that covers compliance, ethics, security, and acceptable use will not only set parameters around usage but also help ensure your organization does not create legal or regulatory compliance concerns. Providing awareness training to your staff is also critical to ensure employees understand the dos and don'ts of using AI.

CampusGuard’s updated Information Security Awareness course for 2024 has been enhanced with a new training module aimed at educating end users about the risks and best practices associated with evaluating AI tools. Contact your dedicated CampusGuard team to request demo access to introduce this new module to your staff.


About the Author
Katie Johnson



Manager, Operations Support

As the manager of Operations Support, Katie leads the team responsible for supporting and delivering CampusGuard services including online training, vulnerability scanning, and the CampusGuard Central® portal. With over 15 years of experience in information security awareness training, Katie is also the Product Lead for CampusGuard’s online training services. As a Senior Customer Relationship Manager for a limited number of customers, Katie assists organizations with their information security and compliance programs and is responsible for coordinating the various teams involved.