AI Security and Governance: A Practical Path to Protection

January 28, 2025

AI-enabled productivity and AI-related risks are two sides of the same coin. It’s nearly impossible to have one without the other. As businesses continue to adopt generative AI (GenAI) at breakneck speed, AI risk management becomes an essential aspect of cybersecurity.

 

Adopting a holistic AI security policy will help align cybersecurity with business goals, compliance requirements and ethical concerns like bias.


Here are the building blocks of an effective AI security and governance policy:


3 Core Components of an AI Security and Governance Policy

When building AI security and governance policies, it is essential to formulate acceptable usage guidelines, AI risk management processes and protocols for managing AI-related security incidents.

 

1. Acceptable Use and Data Protection Guidelines


Your organization likely has acceptable use policies for the in-house and third-party tools deployed throughout the business. Data protection guidelines also help your business demonstrate transparency about how employee and consumer data is used. As more organizations rapidly adopt AI tools, it is important to augment existing confidentiality, data security, employment and ethical policies with AI-specific guidelines.

 

Confidentiality
In a confidentiality agreement, you might add a clause such as: “Passwords, protected health information (PHI), personally identifiable information (PII) such as names or Social Security numbers, and trade secrets must never be shared with GenAI tools.”

 

Prevent confidential information from reaching unauthorized users by ensuring that no confidential, proprietary or sensitive business information enters GenAI tools. To enforce this, organizations may need to implement security controls and adjust data sharing or storage settings within the GenAI tools themselves.
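One way to operationalize this control is a pre-submission filter that screens prompts for obvious sensitive data before they reach a GenAI tool. The sketch below is a minimal, hypothetical illustration — the pattern names and regexes are simplified examples, not a complete data loss prevention policy.

```python
import re

# Hypothetical pre-submission filter: block prompts containing obvious
# PII or secrets before they reach a GenAI tool. These patterns are
# illustrative only, not an exhaustive DLP rule set.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

violations = screen_prompt("My SSN is 123-45-6789, can you summarize this?")
if violations:
    print(f"Prompt blocked; detected: {violations}")
```

In practice, such filters are typically deployed in a proxy or gateway in front of the GenAI tool, combined with the vendor-side data retention settings mentioned above.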

 

Data Protection
Similarly, for data security and protection, approve only AI tools that comply with regional regulations, such as GDPR or CCPA, and industry regulations, such as HIPAA or PCI DSS. Without an explicit policy, companies risk employees using tools that do not comply with data privacy regulations, and such non-compliance may lead to fines, data breaches or reputational damage.

 

Equal Employment
Tread with caution when using AI tools to automate recruitment workflows. Relying on GenAI tools that do not disclose the fairness and limitations of their AI/ML models may produce biased or skewed results, and such outcomes may violate Equal Employment Opportunity (EEO) laws.

 

AI tool algorithms are not inherently or intentionally designed to be discriminatory. However, bias can still arise because of how unstructured data gets processed or the lack of diversity in the datasets being used.
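One common sanity check for this kind of bias is the "four-fifths rule" used in U.S. employment-selection guidance: if one group's selection rate falls below 80% of another's, the outcome warrants review. The sketch below is illustrative — the group counts are hypothetical, and a real assessment would involve legal and statistical review, not just this ratio.

```python
# Illustrative four-fifths (80%) rule check: compare selection rates
# between two groups in outcomes produced by an AI screening model.
# The counts below are hypothetical sample data.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total

def disparate_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

group_a = selection_rate(selected=30, total=100)  # 0.30
group_b = selection_rate(selected=18, total=100)  # 0.18

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60
if ratio < 0.8:
    print("Below the 80% threshold; review the model for bias.")
```

A check like this can run as part of the periodic AI impact assessments discussed later in this post.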

 

Workplace and Ethics
Create an AI tools approval process that regularly reviews and updates the list of tools employees may use. Ask employees to check this list before adopting any new AI tool.
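An approval list like this can be kept machine-readable so that onboarding scripts or internal portals can query it. The sketch below assumes a simple in-memory registry; the tool names, versions and review dates are hypothetical placeholders.

```python
# Hypothetical approved-tools registry: employees (or automation) check
# a tool and version against the allowlist maintained by the AI review
# board. All entries here are illustrative placeholders.
APPROVED_AI_TOOLS = {
    "example-chat-assistant": {
        "approved_versions": {"2024.1", "2024.2"},
        "review_due": "2025-06-30",
    },
    "example-code-helper": {
        "approved_versions": {"1.9"},
        "review_due": "2025-03-31",
    },
}

def is_approved(tool: str, version: str) -> bool:
    """Return True only if the tool and version are on the allowlist."""
    entry = APPROVED_AI_TOOLS.get(tool)
    return entry is not None and version in entry["approved_versions"]

print(is_approved("example-chat-assistant", "2024.2"))  # True
print(is_approved("example-chat-assistant", "2023.9"))  # False
```

The `review_due` field supports the "frequently reviews and updates" requirement: a scheduled job could flag any entry past its review date.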

 

You must also create processes for a team of experts to carefully review the use of AI in mission-critical workflows. Such a measure helps mitigate ethical problems that arise when employees take undue credit for AI outputs or a biased AI algorithm wrongly excludes eligible users.

 

2. AI Risk Management Practices


Establish AI risk management processes and frameworks to evaluate third-party AI vendors and stakeholders. You will also need to create practices that promote transparent data usage, adhere to industry regulations and validate the robustness of AI/ML models. Government and industry bodies have published framework guidance to help companies navigate AI risk management, and regulations that mandate strong risk management practices are also arriving, such as the EU AI Act and the Colorado AI Act.

 

Third-Party Risk Management
Understand the legal disclosures and safety clauses of AI tools. Learn how each tool offers data protection and data ownership rights.

 

Also, read the fine print about how each tool achieves regulatory compliance. Include AI risks and data protection in vendor and contractor agreements.

 

AI Impact Assessments
Because many AI tools are black boxes, their outputs can drift over time, so it is important to monitor and refine AI algorithms frequently. Conducting periodic AI impact assessments with an internal team of experts will help nip AI risks in the bud.

 

AI Incident Response
Establish clear protocols and accountable persons for known risks. Invest in the right cybersecurity tools, such as managed detection and response (MDR) and endpoint security services (ESS), to detect anomalies, identify root causes and remediate them.

 

Redress Mechanisms
Establish internal feedback collection systems to review employee and user complaints. Consider using these feedback loops to detect unfair biases or advantages in proprietary AI solutions.

 

3. Incident Response and Reporting


Develop a robust incident response and reporting structure. Include protocols for managing incidents such as data breaches, model failures or adversarial attacks. Establish periodic reporting structures that gather input and data across business units and verticals.

 

The Optiv field guide provides actionable steps for setting realistic goals to safely leverage and build AI for unique use cases. Download today.

 
Customizing AI Governance Policy

Creating an AI policy is only the beginning of your AI governance journey. Fine-tuning the policy to suit your unique business requirements is a continual endeavor, as AI capabilities change alongside business needs. Here are a few tips to help you maintain and improve your policy over time.

 

  • Perform a gap analysis to identify areas where your current AI policy is not aligned with the organization's risk management framework
  • Involve cross-functional teams across the legal, IT, HR and sales units in the policy design process to help you develop effective, relevant policies
  • Offer AI-related security training to enhance your employees’ compliance with the AI policy
  • Encourage AI project leaders to document how bias is mitigated in proprietary AI/ML systems and to explain the limitations of every model iteration
  • Address the ethical implications of generative AI, including misinformation, transparency and intellectual property concerns
  • Develop auditable metrics related to policy adherence, frequency of AI-related incidents, compliance errors with legal standards and employee awareness
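The last tip — auditable metrics — lends itself to a simple rollup across business units. The sketch below is a hypothetical example assuming incident counts and training records are collected per unit; the field names and figures are placeholders.

```python
# Illustrative policy-adherence rollup, assuming AI incident counts and
# security-training records are collected per business unit. All field
# names and numbers are hypothetical.

def adherence_summary(records: list[dict]) -> dict:
    """Return per-unit incident counts and training completion rates."""
    return {
        r["unit"]: {
            "incidents": r["ai_incidents"],
            "training_rate": r["trained"] / r["employees"],
        }
        for r in records
    }

records = [
    {"unit": "IT", "ai_incidents": 2, "employees": 40, "trained": 36},
    {"unit": "HR", "ai_incidents": 0, "employees": 25, "trained": 25},
    {"unit": "Sales", "ai_incidents": 5, "employees": 60, "trained": 30},
]

for unit, m in adherence_summary(records).items():
    print(f"{unit}: {m['incidents']} incidents, "
          f"{m['training_rate']:.0%} trained")
```

Metrics like these give auditors and leadership a concrete, comparable view of policy adherence over time.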

 

Develop AI Security and Governance Policies with Optiv

AI security and governance policies can help your organization avoid ad-hoc individual decisions that may lead to bottlenecks, errors, inconsistent communication or undesirable outcomes. Evaluate your organizational readiness with this checklist!

 

Optiv’s team of cybersecurity experts will help develop a custom AI security policy aligned with your business goals and regulatory requirements.

 

Contact us to get support in integrating an AI policy into your broader security and risk management cybersecurity programs.

Jennifer Mahoney
MANAGER, DATA GOVERNANCE, PRIVACY AND PROTECTION | OPTIV
Jennifer Mahoney has 18 years of regulatory compliance experience in both consulting and enterprise environments. Her experience ranges from small businesses to Fortune 50 corporations, particularly in the technology, state and local, manufacturing and pharmaceutical verticals. Areas of expertise include the General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA) / California Consumer Privacy Act (CCPA), the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), the Personal Information Protection and Electronic Documents Act (PIPEDA) and many others.