Security Awareness: Governing the Acceptable Usage of AI Technologies


February 26, 2024

AI Usage and Governance

With the ongoing introduction of new Artificial Intelligence (AI) tools, organizations must carefully weigh the extensive opportunities these tools offer against the potential risks that come with employee use.

Generative AI tools, like ChatGPT, focus on creating content, such as text, code, images, or videos. ChatGPT analyzes large data sets, learns the patterns and features of the data, and generates new content. When using ChatGPT in a business setting, one major concern is the potential sharing of sensitive or confidential information.

Although it may be convenient to input data into a tool and receive personalized responses within seconds, if this process involves transmitting customer information or organizational data, you could be simultaneously exposing that information to a third party as it becomes part of the larger data set. This exposure can lead to compliance violations, breaches of customer privacy and confidentiality, and an increased risk of data breaches and unauthorized access.

Organizations should proactively review how these tools can be used internally and ensure new technologies are adopted in ways that align with organizational goals and comply with privacy and consumer protection regulations.

A recent report from ISC2 found that only 27% of cybersecurity professionals said their organizations have a formal policy in place to govern the safe and ethical use of AI, and just 15% of organizations have a formal policy on securing and deploying AI technologies.

A defined policy can set clear guidelines and rules for how generative AI tools may be used and highlight any legal or compliance standards that must be considered. Your policy should establish a standard approach for governing employee use of generative AI and include:

  • Acceptable use
  • Acceptable technologies/approved applications and third-party relationships
  • How to safeguard intellectual property and/or sensitive data
  • Privacy considerations
  • Consequences for policy violations

It is also critical to ensure employees are aware of and understand the evolving risks associated with the use of AI, so they can explore the potential benefits of available tools without inadvertently exposing sensitive organizational information. Ongoing training should also provide guidance on avoiding associated risks, including plagiarism and copyright infringement, fraud, and potential code vulnerabilities. When using any AI tool, human oversight is always needed to confirm the accuracy of information and review generated content for potential biases.

Many organizations and employees are already using AI in some fashion or will likely adopt the technology soon. Creating and adhering to an AI policy that covers compliance, ethics, security, and acceptable use will not only set parameters around usage but will also ensure your organization does not create legal or regulatory compliance concerns. Providing awareness training to your staff is also critical to ensure employees understand the dos and don’ts of using AI.

CampusGuard’s updated Information Security Awareness course for 2024 has been enhanced with a new training module aimed at educating end users about the risks and best practices associated with evaluating AI tools. Contact your dedicated CampusGuard team to request demo access to introduce this new module to your staff.

About the Author
Katie Johnson

Katie Johnson

PCIP

AVP Product and Senior Manager, Operations Support

With over 20 years of experience in information security and awareness training, Katie leads CampusGuard's product and software teams, including our Online Training, Phishing Simulator, CampusGuard Central Portal, and the GRC Platform. Katie is responsible for product planning, roadmap execution, business systems ownership, cross-functional coordination, and day-to-day oversight of product-related initiatives. She also manages the teams responsible for operational support, online training delivery, and vulnerability scanning.
