Optiv Cybersecurity Dictionary

What is AI Governance?

Artificial intelligence (AI) governance is the set of processes, tools and frameworks that guide the ethical development, deployment and use of AI technologies and align them with an organization's vision, strategy, principles, policies and standards.

AI governance frameworks offer a structured approach to navigating the ethical considerations of AI, enabling accountability and transparency and thereby building confidence and trust in AI technologies. Relevant frameworks include the NIST Artificial Intelligence Risk Management Framework (AI RMF), the OWASP AI Security and Privacy Guide, the OWASP Top 10 for Large Language Model Applications and MITRE ATLAS.

Key AI governance actions include:

  • Building AI literacy across the organization by investing in training and education, equipping employees to understand core AI concepts, how AI systems use data, how AI applies to their work, how to use it responsibly and how these aspects intertwine
  • Writing AI policies that cover use, risks and controls, align with the organization's strategy and apply both internally and across the supply chain
  • Determining how accountability for AI systems will be assigned and maintained
  • Building AI governance committees
  • Implementing risk registers for each AI project to track and manage potential risks, as sketched in the example after this list
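
To make the last item concrete, below is a minimal, hypothetical sketch in Python of what a single risk-register entry might capture. The field names, severity scale and impact-times-likelihood score are illustrative assumptions rather than a prescribed schema; a real register would follow the organization's own risk methodology and whichever framework it has adopted, such as the NIST AI RMF.

```python
# Illustrative sketch of an AI project risk-register entry.
# Field names, the severity scale and the scoring method are assumptions,
# not a standard schema; adapt them to your own governance policies.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List


class Level(Enum):
    """Coarse 1-4 scale used here for both impact and likelihood."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class RiskEntry:
    """One tracked risk for a single AI project."""
    risk_id: str          # unique identifier, e.g. "AI-0001"
    description: str      # what could go wrong
    category: str         # e.g. "privacy", "bias", "security"
    impact: Level         # assessed impact if the risk materializes
    likelihood: Level     # assessed probability of it materializing
    owner: str            # accountable person or team
    mitigations: List[str] = field(default_factory=list)
    next_review: date = field(default_factory=date.today)

    def score(self) -> int:
        # Simple impact x likelihood product; swap in your own methodology.
        return self.impact.value * self.likelihood.value


if __name__ == "__main__":
    entry = RiskEntry(
        risk_id="AI-0001",
        description="Customer-facing chatbot may reveal personal data in responses",
        category="privacy",
        impact=Level.HIGH,
        likelihood=Level.MEDIUM,
        owner="Data Protection Office",
        mitigations=["Output filtering", "PII redaction in training data"],
    )
    print(f"{entry.risk_id} ({entry.category}) score={entry.score()}")
```

Entries like this can then be aggregated and reviewed on a regular cadence by the AI governance committee.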
