AI and the Art of Avoiding Cyberattacks

March 10, 2023

This is the first blog in a four-part series on AI and cybersecurity. Read the next blog now - Cybersecurity in the AI Era.

 

Artificial Intelligence (AI) and cybersecurity have been in the news a lot lately, with both topics often getting overhyped and misunderstood. It seems every day there is a new headline about an amazing AI service in one place and a shocking data breach somewhere else. As technology races ahead, the ability of bad guys to do bad things seems to increase just as rapidly. But what do AI and cybersecurity really have to do with each other?

 

Up until recently, AI models have been quite “dumb”: they could only handle specific tasks after being trained on a large dataset that told them exactly what to look for. But over the last five years, research breakthroughs have taken AI to a whole new level, enabling computers to better understand the meaning behind words and phrases with only general-purpose training. This is enabling AI tools that are far more broadly useful than the previous generation. The ease of use and the sheer scale of interest are also driving down cost. It is now easier and cheaper to build AI into your workflow than it has ever been.

 

ChatGPT took the world by storm by demonstrating the power of a large language model (LLM) to work with human language. It has produced creative and useful results across every industry, and it represents a new capability in human-computer interaction. In essence, the AI models behind ChatGPT allow users to “speak” to their data. It’s not yet perfect, but it’s a major advancement in AI, and we can expect other technology companies to soon release competing models.

 

As with any new technology, this new breed of AI model can be used for both good and bad – and this has major implications for the world of cybersecurity. Here’s what we can expect over the coming months.

 

 

AI will impact the cybersecurity industry

ChatGPT has demonstrated that AI can be a gold mine of insight, removing much of the legwork involved in research and problem-solving by giving users, with a single prompt, access to knowledge distilled from a huge swath of the public internet. With this resource at their fingertips, cybersecurity professionals can find information, get answers, brainstorm ideas and detect and protect against threats more quickly. AI has been shown to help write code, identify gaps in knowledge and prepare communications – tasks that enable professionals to perform their daily job responsibilities much more efficiently.
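
To make this concrete, here is a minimal, hypothetical sketch of one such analyst workflow: asking an LLM to help triage a suspicious email. It assumes the OpenAI Python client as it existed in early 2023 and an API key in an environment variable; the model name, prompt and email text are purely illustrative.

```python
# Hypothetical sketch: asking an LLM to help triage a suspicious email.
# Assumes the OpenAI Python package (circa early 2023) and an API key in the
# OPENAI_API_KEY environment variable; model, prompt and email are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

suspicious_email = """Subject: Urgent invoice update
Please click the link below and confirm your banking details today..."""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a security analyst assistant. Flag phishing "
                    "indicators and suggest next steps for the SOC."},
        {"role": "user", "content": suspicious_email},
    ],
    temperature=0,  # keep the triage output as deterministic as possible
)

print(response["choices"][0]["message"]["content"])
```

The point is not that the model replaces the analyst, but that a few lines of glue code can put this kind of assistance directly into an existing workflow.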

 

AI models might even help close the cybersecurity talent shortage by making individual security professionals significantly more effective – so much so that one person, aided by AI, may be able to produce the output of several. It should also help reduce the cybersecurity skills gap by enabling even junior personnel with limited cybersecurity experience to get the answers and knowledge they need almost instantaneously.

 

From a business standpoint, ChatGPT will inform a generation of similar AI tools that help companies access and use their own data to make better decisions. Where a team running a series of database queries responds today, a chatbot backed by an AI engine may respond tomorrow. And because the technology can take on menial, data-driven tasks, organizations may soon reallocate personnel to higher-value initiatives or pair them with an AI to add business value.
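
As a rough illustration of that “chatbot over your data” pattern, here is a hypothetical sketch in which an LLM drafts a SQL query from a plain-English question and a human reviews it before anything runs. The schema, model name and prompt are illustrative assumptions, not a production design.

```python
# Hypothetical sketch of "speaking to your data": an LLM drafts a SQL query
# from a plain-English question, and a human reviews it before it runs.
# The table schema, model and prompt are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

schema = "orders(order_id, customer_id, region, total_usd, created_at)"
question = "Which region generated the most revenue last quarter?"

draft = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": f"Translate the user's question into a single SQL query "
                    f"against this schema: {schema}. Return SQL only."},
        {"role": "user", "content": question},
    ],
    temperature=0,
)

candidate_sql = draft["choices"][0]["message"]["content"]
print("Proposed query (review before executing):\n", candidate_sql)
```

Keeping a human review step between the generated query and the database is one simple way to get the convenience without handing the AI the keys.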

 

 

Bad guys have access, too

Unfortunately, cybersecurity professionals and businesses aren’t the only parties that can benefit from ChatGPT and similar AI models – cybercriminals can, too. And we’re already seeing bad actors turn to ChatGPT to make cybercrime easier – using it for coding assistance when writing malware and to craft believable phishing emails, for example.

 

The scary thing about these AI models is that they are excellent at imitating human writing, which gives them the potential to be powerful phishing and social engineering tools. Using the technology, non-native speakers will be able to craft phishing emails with perfect spelling and grammar. It will also make it much easier for bad actors to emulate the tone, word selection and writing style of their intended target – making it harder than ever for recipients to distinguish between a legitimate and a fraudulent email.

 

Last but certainly not least, AI lowers the barrier to entry for threat actors, enabling even those with limited cybersecurity background and technical skills to carry out a successful attack.

 

 

Ready or not, here it comes

Whether we like it or not, ChatGPT and next-generation AI models are here to stay, which presents us with a choice: we can be afraid of the change and what’s to come, or we can adapt to it and ensure we embrace it holistically by implementing both an offensive and defensive strategy.

 

From an offensive perspective, we can use AI to make workers more productive and help the business make better decisions. AI can help find new customers, build better products, operate with greater efficiency, and increase satisfaction, retention, and value. It can accelerate trends like no-code development and the shift to cloud in beneficial ways.

 

From a defensive standpoint, AI is further widening the attack surface and risk exposure of organizations. Adopting technology for its own sake is a recipe for disaster – strategy, policy, procedures, and protocols must be updated to account for and protect against AI-related risks.

 

Risks to consider with AI:

  1. Chatbots and AI are often confidently wrong. These tools alone can't replace experienced workers in their domains.
  2. Relying on AI may pose legal and ethical risks, such as data privacy, discrimination, IP rights, and making misleading statements. These technologies could expose your business to more risk than expected.
  3. Newer tools and technologies that use these capabilities may not yet be up to enterprise-grade security. A rush to add these capabilities should not bypass existing company cyber and risk standards and policies.
  4. Employees may enter proprietary information into chat prompts, which could later be exposed or breached (a simple redaction sketch follows this list).
  5. Sophisticated social engineering and phishing attacks may increase as attackers adopt generative AI tools, including your own, to target your organization.
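
On the fourth risk, one simple mitigation is to scrub obviously sensitive strings from prompts before they ever leave your environment. The sketch below is a hypothetical, minimal example of that idea; the patterns are illustrative and are no substitute for a real data loss prevention program.

```python
# Hypothetical sketch of one mitigation for risk #4: scrub obvious sensitive
# strings from a prompt before it is sent to an external AI service.
# The patterns here are illustrative, not a complete DLP solution.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),           # email addresses
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[IP_ADDRESS]"),  # IPv4 addresses
    (re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def redact(prompt: str) -> str:
    """Replace obviously sensitive substrings before the prompt leaves the network."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Ask the bot why jane.doe@example.com from 10.0.0.12 "
             "saw 'password: hunter2' in the logs"))
```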

 

How to prepare?

  1. Remember that humans matter. Just as human error causes many of today's cyber incidents, AI can also get things wrong, so it should be deployed with human-in-the-loop supervision and validation. The more critical the business function, the more human oversight it needs.
  2. Treat new AI technologies with the same rigor as any other new technology. Assess these additions from a cybersecurity, privacy, compliance, and risk perspective. This includes implementing end-to-end encryption, authentication processes, monitoring, automated interventions, and education (a minimal monitoring sketch follows this list).
  3. Recognize that human error remains the most significant cause of cyber incidents. Continually update training and workforce testing processes to catch AI-related attacks.
  4. Invest in learning about these tools and understanding how they can benefit the business to make well-informed risk/reward decisions.
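
As one small example of the monitoring mentioned in item 2, the hypothetical sketch below wraps calls to an external AI service in an audit log that records who asked what and when, so usage can be reviewed against policy. The log format and destination are illustrative assumptions.

```python
# Hypothetical sketch of the "monitoring" piece of item 2: a thin wrapper that
# records who sent what to an external AI service and when, so usage can be
# audited against policy. The log destination and fields are illustrative.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_usage_audit")
audit_log.addHandler(logging.FileHandler("ai_usage_audit.jsonl"))
audit_log.setLevel(logging.INFO)

def audited_prompt(user: str, prompt: str, send_fn):
    """Log request metadata, call the AI service, then log the outcome."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_chars": len(prompt),  # log size, not content, to limit data exposure
    }
    try:
        response = send_fn(prompt)
        record["status"] = "ok"
        return response
    except Exception as exc:
        record["status"] = f"error: {exc}"
        raise
    finally:
        audit_log.info(json.dumps(record))

# Example with a stand-in send function:
print(audited_prompt("analyst1", "Summarize today's failed logins",
                     lambda p: "stub response"))
```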

 

ChatGPT and AI are changing the game for both security professionals and cybercriminals, and we need to be ready. Being aware of the opportunities and challenges associated with this new technology, and then putting a holistic strategy in place, will help you leverage this new era of AI to drive your business. Ignoring these developments puts your business at risk.

 

This is the first part of a series that will explore the intersection of AI and cybersecurity, diving into how they are intertwined and what it could mean for the future of business and technology. We'll look at current trends, uncover their implications for security in our daily lives and in our businesses, and discuss what the future might hold as these cutting-edge technologies continue to spread.

 

Part One: How AI is evolving the threat landscape
As AI technology continues to advance, so too do the cybercriminals who are using it for their own malicious purposes. AI makes many tasks easier, and attackers are using this to increase both the volume and the sophistication of their operations. We have seen, and will likely see more of, sophisticated phishing attacks, deepfake videos and audio recordings, and complex social engineering scams. AI algorithms can also be used to autonomously identify and exploit vulnerabilities in computer networks, making criminals’ jobs much easier. Lastly, AI itself is a new and enticing attack vector: new products and platforms racing to market may leave their customers unknowingly exposed to much higher risk. It is therefore essential that businesses and cybersecurity professionals stay informed about these evolving AI-enabled threats and take proactive steps to ensure their safety and security.

 

Part Two: How AI can be (Really) Used in Cybersecurity
AI has become an increasingly popular “feature” of cybersecurity tools in recent years. In this section, we'll explore how AI is actually being used to detect and prevent cyber-attacks, and touch on some of the ways that use is evolving. Recent developments in generative AI signal a new potential for AI to enhance the workflow of cybersecurity teams, but there are pitfalls here too – it won’t be a seamless transition. Ultimately, many of the success factors for AI-enabled cybersecurity are linked to having a successful human-run program: clear policies, clean data, governance, and procedures. When companies embrace the difficult work of maturing their cybersecurity programs, they will find AI ready to assist.
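
For a flavor of what “AI-powered” detection often means in practice today, here is a hypothetical sketch of unsupervised anomaly scoring over simple features drawn from authentication logs, using scikit-learn's IsolationForest. The features and data are toy examples, not a production detector.

```python
# Hypothetical sketch of one common pattern behind "AI-powered" detection:
# unsupervised anomaly scoring over simple features from authentication logs.
# The features and data here are toy examples, not a production detector.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login hour (0-23), failed attempts in last hour, distinct source IPs]
normal_activity = np.array([
    [9, 0, 1], [10, 1, 1], [14, 0, 1], [16, 2, 1], [11, 0, 2],
    [13, 1, 1], [15, 0, 1], [9, 1, 1], [17, 0, 1], [10, 0, 1],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(normal_activity)

# Score new events: -1 flags an outlier worth an analyst's attention.
new_events = np.array([
    [14, 1, 1],    # looks like routine daytime activity
    [3, 40, 12],   # 3 a.m., many failures, many source IPs: suspicious
])
print(model.predict(new_events))  # e.g. [ 1 -1 ]
```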

 

Part Three: The Future of Cybersecurity and AI
The pace of AI development is increasing thanks to new research, greater computing power, and the widespread availability of data. The future is full of possibilities to massively change the way we work and live. In this section, we'll speculate on what the future holds at the intersection of cybersecurity and AI, make some educated guesses, and discuss examples of that future arriving sooner rather than later. We’ll also look at the risks and challenges of relying too heavily on AI in cybersecurity, the potential for new AI-based cyber threats to emerge, and how to prepare for them.

 

At the end of this series, we hope you'll have a better understanding of how AI and cybersecurity are shaping our digital world and what opportunities and risks lie ahead. The intersection of AI and cybersecurity is a fascinating topic and we hope you’ll reach out with your perspectives, observations, questions, and corrections! Read Part One next.

Randy Lariar
Practice Director - Big Data & Analytics | Optiv
Randy leads Optiv’s Big Data and Analytics practice, a part of the Cyber Digital Transformation business unit. He helps large firms to build teams, lead programs, and solve problems at the intersection of technology, data, analytics, operations, and strategy.

Randy leads a cross-functional team of data engineers, data scientists, and cybersecurity professionals who advise and implement value-aligned solutions for large-scale and fast-moving data environments. His clients include top firms managing their cyber-related data as well as organizations seeking to unlock new insights and automations.

