Leveraging Artificial Intelligence in Cloud Security Is the Way Forward

April 16, 2025

ISACA’s recent State of Cybersecurity 2024 report finds that more than four in five cybersecurity professionals feel more stressed due to an increasingly complex threat environment. At the same time, more than half of cybersecurity teams are somewhat or significantly understaffed.

 

This twofold blow of increasing threat complexity and understaffed security teams is driving organizations to adopt new tools, including AI, for cloud threat detection, anomaly identification and compliance breach monitoring.

 

Adopting best practices for cloud-based AI/ML tools helps detect and prevent known attacks such as SQL injection, DDoS and cross-site scripting. AI-powered security tools also make it possible to train AI/ML models to detect new attacks based on a specific industry threat profile.
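
To make this concrete, the sketch below trains a simple classifier to flag requests that resemble known attack patterns. The feature set, sample data and library choice (scikit-learn) are illustrative assumptions rather than a prescribed implementation; a real deployment would engineer features from WAF or application logs aligned to the organization's threat profile.

    # Minimal sketch: classify web requests using features that often correlate
    # with SQL injection or XSS attempts. Features and labels are hypothetical.
    from sklearn.ensemble import RandomForestClassifier

    # Per-request features: payload length, special-character count,
    # attack-keyword hits, requests per minute from the source IP
    X = [
        [120, 14, 3, 2],   # resembles SQL injection probing
        [45, 1, 0, 1],     # benign
        [300, 22, 5, 40],  # long payload, high request rate
        [60, 2, 0, 3],     # benign
    ]
    y = [1, 0, 1, 0]  # 1 = known attack pattern, 0 = benign

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(model.predict([[250, 18, 4, 35]]))  # score an unseen request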

 

Without AI-powered cybersecurity tools to assist with automation and triage, organizations risk over-relying on manual intervention and facing longer threat detection times. This, in turn, leads to higher breach rates, restoration costs and reputational damage.

 

 

How to Implement AI in Cloud Security

AI-based identity and access management (IAM), incident response capabilities, continuous threat monitoring and compliance management are the four cornerstones of a robust cloud security foundation.

 

An IAM solution that authenticates users and manages their individual permissions helps reduce the risk of unauthorized access. IAM also helps cybersecurity teams react quickly to security alerts when a policy breach or unauthorized information sharing occurs.
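
As an illustration, the sketch below defines a least-privilege policy that grants read-only access to a single storage bucket rather than blanket permissions. AWS IAM syntax and boto3 are used only as an example, and the bucket and policy names are hypothetical.

    # Minimal sketch of a least-privilege IAM policy (AWS syntax as an example;
    # bucket and policy names are hypothetical). It grants read-only access to
    # one bucket instead of broad S3 permissions.
    import json

    read_only_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::example-audit-logs",
                    "arn:aws:s3:::example-audit-logs/*",
                ],
            }
        ],
    }

    # With valid credentials, the policy could be created via boto3:
    # import boto3
    # boto3.client("iam").create_policy(
    #     PolicyName="audit-log-read-only",
    #     PolicyDocument=json.dumps(read_only_policy),
    # )
    print(json.dumps(read_only_policy, indent=2))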

 

Experienced incident responders, investigators and malware reverse engineers are in perpetually short supply and high demand. However, security teams can largely offset this talent shortfall by investing in proactive incident response capabilities that enable quick remediation and restoration of business operations in the event of an attack. These capabilities can be built organically within an organization or by partnering with outside firms that help with investigations and, especially, with environment remediation and restoration.

 

Threat monitoring capabilities harden cloud security by improving visibility across all regional, departmental and operational silos. Such an investment helps security teams continuously adapt to new data sources, rule sets and cloud assets, even as cloud infrastructure configurations change. Many clients we talk with struggle to hire talented cloud employees into their SOCs, but it is imperative that these skills are added. Without this skill set, the SOC will have a blind spot and will rely heavily on tools and vendor content to detect threats.

 

Compliance management software ensures centralized control over all sensitive data assets, endpoints and system-level permissions to meet regulatory requirements. Such an investment is crucial for tightly regulated industries such as finance, healthcare and defense.
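
The sketch below shows the kind of automated control check such software runs continuously: flagging storage buckets that do not fully block public access. AWS S3 and boto3 are used as an example (valid credentials are assumed); the same pattern applies to any provider's storage and policy APIs.

    # Minimal sketch of an automated compliance check: flag S3 buckets that do
    # not block public access. Requires valid AWS credentials to run.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            compliant = all(config.values())  # all four public-access blocks enabled
        except ClientError:
            compliant = False  # no public access block configured at all
        if not compliant:
            print(f"NON-COMPLIANT: {name} does not fully block public access")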

 

In addition to the above four areas, customized AI/ML models can enable faster threat detection, reduce costs, improve compliance and enhance resilience.

 

For example, organizations can consider using retrieval-augmented generation (RAG) to combine business-specific expertise with generative AI capabilities to create a personalized chatbot for their cybersecurity team. Such a chatbot could answer important questions or retrieve key information for analysts, ensuring better compliance with cybersecurity policies. Analysts would no longer have to spend extra effort searching for a specific data governance policy. This capability also enriches alerts and lets analysts make decisions faster, without needing to review multiple tools to determine the overall scope of an event.
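
A minimal sketch of that RAG flow appears below: the most relevant policy passage is retrieved and assembled into a prompt for a generative model. TF-IDF retrieval stands in for a production embedding model, the policy excerpts are invented for illustration, and the final LLM call is left as a placeholder.

    # Minimal RAG sketch: retrieve the best-matching policy passage for an
    # analyst question and build a prompt for a generative model.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical policy excerpts an organization might index
    policies = [
        "Data governance: customer records must be encrypted at rest and in transit.",
        "Incident response: sev-1 incidents require notification within one hour.",
        "Access control: production access requires MFA and manager approval.",
    ]

    vectorizer = TfidfVectorizer().fit(policies)
    policy_vectors = vectorizer.transform(policies)

    def answer(question: str) -> str:
        scores = cosine_similarity(vectorizer.transform([question]), policy_vectors)[0]
        context = policies[scores.argmax()]  # best-matching policy passage
        prompt = f"Answer using this policy excerpt:\n{context}\n\nQuestion: {question}"
        # In production, send the prompt to the organization's approved LLM endpoint.
        return prompt

    print(answer("How quickly do we have to report a sev-1 incident?"))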

 

Similarly, organizations may consider building a custom LLM agent that tracks and summarizes all cybersecurity threats and incidents. Because generative AI is good at summarizing vast amounts of data, it can quickly help cybersecurity teams uncover key insights across thousands of attack vectors and patterns. Such an AI agent could also provide teams with the necessary information to respond appropriately.
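
In the same spirit, the sketch below aggregates incident records and builds a summarization prompt for such an agent. The incident fields, counts and the model call are placeholders for whatever ticketing system and LLM endpoint an organization actually uses.

    # Minimal sketch: aggregate incident records and build a summarization
    # prompt for a generative model. All fields here are hypothetical.
    from collections import Counter

    incidents = [
        {"vector": "phishing", "severity": "high", "detail": "credential harvesting email"},
        {"vector": "ransomware", "severity": "critical", "detail": "encrypted file shares"},
        {"vector": "phishing", "severity": "medium", "detail": "spoofed invoice"},
    ]

    by_vector = Counter(i["vector"] for i in incidents)
    digest = "\n".join(f"- {i['vector']} ({i['severity']}): {i['detail']}" for i in incidents)

    prompt = (
        f"Summarize this week's {len(incidents)} incidents "
        f"(top vectors: {by_vector.most_common(2)}) and recommend response priorities:\n{digest}"
    )
    # In production, send `prompt` to the team's approved LLM and route the summary to analysts.
    print(prompt)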

 

 

Adopt These 6 Cybersecurity Best Practices for Cloud-Based AI Tools

Harden cloud infrastructure by adopting the following six AI cybersecurity best practices:

  1. Set security objectives that align with a specific industry threat profile. Define clear targets for how AI will improve an organization’s threat detection rates, data protection KPIs and automation efforts
  2. Use continuous monitoring tools to observe cloud environments in real time. Early anomaly detection helps prevent costly attacks (see the sketch after this list)
  3. Leverage AI tools to reduce systemic weaknesses. Use AI tools with multi-layered security controls such as network segmentation and IAM
  4. Automate incident response using proven frameworks such as the NIST Cybersecurity Framework, MITRE ATT&CK and MITRE D3FEND. Train personalized AI/ML models to detect ransomware, IP theft and malware patterns based on a specific industry threat profile
  5. Review and update AI models periodically to identify emerging dangers. Retraining and fine-tuning these purpose-built AI/ML models frequently improves their chances of success
  6. Adopt zero-trust principles by granting all users and services the fewest possible permissions. Fewer privileges reduce the possibility of unintentional or unauthorized data sharing
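
For best practice 2, a minimal anomaly-detection sketch follows: score incoming cloud audit-log events against a baseline and surface outliers for analyst review. The features (API calls per minute, distinct regions touched, failed logins), the sample data and the use of scikit-learn's IsolationForest are illustrative assumptions.

    # Minimal anomaly-detection sketch: flag cloud audit-log events that deviate
    # from a baseline of normal activity. Feature vectors are hypothetical.
    from sklearn.ensemble import IsolationForest

    baseline = [  # [API calls/min, distinct regions touched, failed logins]
        [12, 1, 0], [9, 1, 1], [15, 2, 0], [11, 1, 0], [10, 1, 1],
    ]
    new_events = [
        [13, 1, 0],    # resembles normal activity
        [400, 6, 25],  # burst of API calls across regions with failed logins
    ]

    detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline)
    for event, label in zip(new_events, detector.predict(new_events)):
        if label == -1:  # IsolationForest marks outliers with -1
            print(f"Anomaly flagged for analyst review: {event}")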

 

Hiring Experts vs. Implementing Internally: What’s Right?

The correct implementation route depends on an organization's cybersecurity capabilities. As a rule of thumb, internally implementing an AI cybersecurity transformation project is recommended only for teams with a wide range of skills and experience. All other organizations should consider partnering with an external team of cybersecurity experts.

 

Cybersecurity teams can use the checklist below to determine whether they have the depth and breadth of cybersecurity skills to pull off an internal implementation:

  • Do you have in-house expertise to formulate a holistic AI cybersecurity roadmap?
  • Do you have a team of trusted experts who can advise on AI security at strategic, programmatic and technical levels?
  • Do you have access to consultants who can help pinpoint the value and opportunities AI offers?
  • Have you previously partnered with solution providers to solve business problems?
  • Can you prioritize security budgets and focus on business impact?
  • Do you have expertise in cybersecurity risk, AI modeling and data governance?
  • Have you documented and prioritized AI risks relevant to your business?
  • Do you understand the various risk mitigation or management options available?
  • Can you enable a workforce to securely implement, use and maintain AI tools?
  • Do you know how to align your AI tools with governance and risk management efforts?

 

 

The Challenges of Using Artificial Intelligence in Cybersecurity

Implementing AI cybersecurity tools gives rise to new challenges, including hallucinations, data usage and privacy concerns.

 

AI tools are prone to hallucinations. For instance, an AI-powered threat response tool may make false recommendations based on incorrect assumptions or inferred causation. Security teams therefore need appropriate AI protocols, such as a manual validation or fact-checking step, to catch such hallucinations. For this reason, AI tools should be treated as an assistant, not an expert. Keeping humans in the loop is critical to ensuring the output of these tools is valid.
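
One lightweight way to keep humans in the loop is a triage gate like the sketch below: only low-risk, high-confidence AI recommendations are auto-executed, and everything else is queued for an analyst. The action names, confidence threshold and recommendation format are hypothetical.

    # Minimal sketch of a human-in-the-loop guardrail: auto-execute only
    # low-risk, high-confidence recommendations; queue the rest for review.
    AUTO_APPROVE_ACTIONS = {"quarantine_file", "reset_session"}
    CONFIDENCE_THRESHOLD = 0.9

    def triage(recommendation: dict) -> str:
        action = recommendation["action"]
        confidence = recommendation["confidence"]
        if action in AUTO_APPROVE_ACTIONS and confidence >= CONFIDENCE_THRESHOLD:
            return "auto-execute"
        return "queue for analyst review"  # a human validates before anything runs

    print(triage({"action": "quarantine_file", "confidence": 0.95}))  # auto-execute
    print(triage({"action": "block_subnet", "confidence": 0.97}))     # queue for analyst review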

 

AI tools are also prone to behave like black boxes. For example, an AI model may assign threat probability values based on several factors but may not share enough information on how it arrived at the answer. So, security analysts may find it hard to accept AI recommendations at face value, especially when they don’t know how much confidence can be placed in them. AI tools work best when the algorithms working behind the scenes are transparent and can be fine-tuned. If a tool set does not provide transparent information, you are being asked to “trust us” with your investigation. That may be acceptable depending on your risk tolerance, but a more open system, where verification can occur and the process can be fully understood, is recommended.
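
One way to reduce that opacity is to report how each input contributed to a score. The sketch below uses a logistic regression so the per-feature contributions to the log-odds can be printed alongside the threat probability; the features and training data are hypothetical, and the approach is illustrative rather than any specific vendor's method.

    # Minimal sketch of an inspectable threat score: report each feature's
    # contribution (coefficient * value) alongside the probability.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["failed_logins", "new_geo_location", "off_hours_access"]
    X = np.array([[0, 0, 0], [1, 0, 1], [8, 1, 1], [12, 1, 0], [0, 1, 0], [9, 0, 1]])
    y = np.array([0, 0, 1, 1, 0, 1])  # 1 = confirmed malicious

    model = LogisticRegression().fit(X, y)

    event = np.array([10, 1, 1])
    probability = model.predict_proba([event])[0, 1]
    contributions = model.coef_[0] * event  # per-feature contribution to the log-odds

    print(f"Threat probability: {probability:.2f}")
    for name, value in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
        print(f"  {name}: {value:+.2f}")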

 

Using AI tools may also lead to software sprawl, where many different tools and apps are used to accomplish business tasks. Software sprawl, in turn, leads to a larger security footprint and a greater likelihood of threats, effectively increasing cybersecurity complexity. However, a robust cloud security strategy and periodic risk reporting can save hundreds of thousands of dollars. Having a cloud governance committee with representatives who understand AI and the AI tools being used or evaluated can help reduce this risk and ensure the overall cloud strategy is considered when purchasing new tools.

 

 

Find Your Right Mix of AI Gains vs. Cybersecurity Risks

Every organization must balance AI gains against security risks based on its size, cybersecurity maturity, industry threat profile and past track record of remediating attacks.

 

Optiv helps you craft a holistic and personalized cloud and AI security strategy aligned with your business goals. By leveraging best practices for cloud-based AI tools, Optiv ensures that your organization maximizes its cybersecurity potential while maintaining compliance and operational efficiency. Our team of experts will help you with AI governance and processes, as well as identify and integrate the right cybersecurity solutions in a phased and structured manner, without hindering innovation.

 

Contact us today to create a secure cloud-based AI infrastructure and govern your AI developments.

Marty McDonald
Sr. Demand & Delivery Manager, CDAS | Optiv
Marty is a subject matter expert in the design and implementation of security information and event management (SIEM) systems and is well-versed in creating detection mechanisms that enhance security operations centers and compliance effectiveness. He has 20 years of deep cybersecurity industry experience gained from a variety of value-added resellers and solutions integrators. Prior roles include Senior Consultant in Security Intelligence at Datalink and Senior Consultant in the Technology Solutions Delivery organization at Accenture.