#AskOptiv: Defending Against AI Weaponized Threats

Question: There’s a lot of ink in the wild about threat actors using artificial intelligence (AI). Is using AI the only way to provide defense against this sort of attack, or is there another way? If using AI for defense is the only option, what practical and reasonable steps can be taken to minimize collateral damage/unintended consequences? -EM, Denver


Really good questions, EM.


2020 is the year for us to seriously consider artificial intelligence (AI) and how it impacts operations, including those of threat actors.


With all the buzz around the very broad concepts of automation and AI within computer science, we should begin by briefly discussing exactly what artificial intelligence is and is not. Within the context of computer science, Optiv defines AI as the following:


Cyber intelligence agents used to learn and adapt to perform cognitive actions.


A cyber intelligence agent may be a device or software, including malware. John McCarthy originally coined the term in 1956, and at that time he was focused more on the science and creation of AI than on the outcomes:


The science and engineering of making intelligent machines.


We now think of it in terms of learning, reasoning, problem solving, perception and language. This may also include creativity and “thinking outside the box” cognitive outcomes. We’re starting to see more and more actual use cases these days, including autonomous cars, human speech recognition and adaptive strategic operations such as military actions. For example, the US DoD published a 2018 summary of its artificial intelligence strategy, including the use of AI to protect US infrastructure. (And AI has already proven it can beat even the best humans at complex games like chess, Go and even Jeopardy.)


AI Malware Threats


There are no substantiated AI-related malware threats in the wild as of today, but a lot of attention is being devoted to the space. DeepLocker, authored by IBM, is an AI-powered attack tool designed to explore how the technology can be weaponized in malware. Capable of facial recognition, geolocation and voice recognition, DeepLocker is stealthy, highly evasive and “ultra-targeted.” A deep neural network (DNN) AI model identifies specific conditions of interest in targeting a victim; once those conditions are met, the DNN model enables the malware to perform attack actions against the identified target. DeepLocker also helps researchers study how AI can change the tactics, techniques and procedures (TTPs) of conventional attacks.
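
The core of the technique is that the payload ships encrypted and the decryption key is derivable only from attributes the DNN observes on the intended target, so analysts can recover neither the trigger conditions nor the payload from the sample alone. Below is a minimal conceptual sketch of that concealment pattern, not IBM’s code; the helper functions are hypothetical, the attribute byte strings stand in for the DNN’s quantized output, and the “payload” is a harmless placeholder.

```python
# Conceptual sketch of trigger-keyed concealment as demonstrated by
# DeepLocker. All names and values are hypothetical; this is not IBM's code.
import hashlib

def derive_key(attribute_vector: bytes) -> bytes:
    """Derive key material from target attributes (e.g., a face embedding
    quantized by a DNN). Wrong target -> wrong key -> payload stays opaque."""
    return hashlib.sha256(attribute_vector).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Build time: encrypt the (here harmless) payload under the intended
# target's attributes; only the ciphertext ships in the sample.
target_attributes = b"quantized-embedding-of-intended-target"  # hypothetical
ciphertext = xor_bytes(b"benign stand-in payload", derive_key(target_attributes))

# Run time: the sample re-derives the key from what it observes locally;
# decryption succeeds only when the observed target matches.
observed_attributes = b"quantized-embedding-of-intended-target"
print(xor_bytes(ciphertext, derive_key(observed_attributes)))
```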


It’s been theorized that AI can be used to scan social media to identify targets of interest for spear phishing campaigns, with automated social engineering and improved customization used to improve attack success rates. It’s also been theorized that deepfakes can be leveraged in targeted malicious attacks to extort or to manipulate perception and truth. This is of increasing concern given how convincing and persuasive deepfakes and misinformation campaigns already are. What should we expect once they’re powered by AI models and synthetic augmentation techniques and deployed in the manipulation of modern media?


Countermeasures for AI-Related Malware

Applying machine learning (ML) and AI to threat detection is the first step in helping the security industry identify and prevent AI-based attacks. It’s assumed that threat actor payloads and attacks, including TTPs, are dynamic and ever-changing. A robust intelligence approach to processing big data and indicators of compromise (IOCs) in context, coupled with enrichment, reputational data, detonation data and additional context, is a huge undertaking. Leveraging ML and AI is essential to the timely and efficient processing of that data (in addition to enhancing threat detection).
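
As a rough illustration of the enrichment-and-scoring step described above, here is a minimal sketch in Python. The feeds, field names and weights are hypothetical placeholders rather than any particular product’s API; a production system would learn its weights from data instead of hard-coding them.

```python
# Minimal sketch of IOC enrichment and scoring. All feed names, fields and
# weights are hypothetical placeholders for illustration only.
from dataclasses import dataclass, field

@dataclass
class IOC:
    value: str                       # e.g. a file hash, domain or IP
    kind: str                        # "hash" | "domain" | "ip"
    enrichments: dict = field(default_factory=dict)

def enrich(ioc: IOC) -> IOC:
    # In practice these values would come from reputation feeds, sandbox
    # detonation verdicts, passive DNS and similar sources.
    ioc.enrichments["reputation"] = 0.9       # hypothetical feed score (0-1)
    ioc.enrichments["detonation_hits"] = 3    # hypothetical sandbox verdicts
    ioc.enrichments["first_seen_days"] = 1    # newly observed infrastructure
    return ioc

def score(ioc: IOC) -> float:
    e = ioc.enrichments
    # Weighted blend of enrichment signals; the weights here are invented.
    return (0.5 * e["reputation"]
            + 0.3 * min(e["detonation_hits"] / 5, 1.0)
            + 0.2 * (1.0 if e["first_seen_days"] < 7 else 0.0))

ioc = enrich(IOC("44d88612fea8a8f36de82e1278abb02f", "hash"))
print(f"{ioc.value}: {score(ioc):.2f}")       # escalate above some threshold
```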


One possible use case involves developing an ML/AI solution to detect a spam wave campaign underway in the wild. Common TTPs for this today involve abuse of the email vector, multiple waves of malware variants with minor modifications and/or unique cryptographic hash values and some shared infrastructure if remote command and control (C2) is used. It’s also common to target specific sectors (although this doesn’t always happen, as with the Carbanak [G0008] threat actor group). The manual method is slow and inconsistent: threat analysts examine individual tickets, attempt to quickly identify a potential threat and then inform a client or internal team. ML/AI can instead process vast amounts of data across multiple client environments and tickets in real time, correlating them and providing granular attribution, coupled with orchestration and automation actions like auto-escalate, auto-notify and auto-defend (e.g., taking an infected endpoint offline).
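
A minimal sketch of the correlation step follows, assuming tickets have already been normalized into records; the field names, sample data and escalation threshold are hypothetical. Because each wave can carry unique hashes, the pivot is the infrastructure the campaign reuses.

```python
# Hypothetical sketch: surface a spam wave by correlating email tickets
# across client environments on shared infrastructure, despite each
# message carrying a unique attachment hash.
from collections import defaultdict

tickets = [  # hypothetical normalized ticket records
    {"client": "A", "c2_domain": "bad.example", "attachment_sha256": "aaa1"},
    {"client": "B", "c2_domain": "bad.example", "attachment_sha256": "bbb2"},
    {"client": "C", "c2_domain": "bad.example", "attachment_sha256": "ccc3"},
    {"client": "A", "c2_domain": "ok.example",  "attachment_sha256": "ddd4"},
]

# Unique per-message hashes defeat hash-based grouping, so group on the
# command-and-control domain the campaign reuses instead.
campaigns = defaultdict(list)
for t in tickets:
    campaigns[t["c2_domain"]].append(t)

for domain, hits in campaigns.items():
    clients = {t["client"] for t in hits}
    if len(clients) >= 3:  # hypothetical threshold: same infra, many clients
        print(f"Possible campaign via {domain}: {len(hits)} tickets across "
              f"{len(clients)} clients -> auto-escalate / auto-notify")
```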


Last February Microsoft successfully implemented ML (built into Windows Defender AV) to detect and mitigate Emotet malware. Emotet is a mature threat well known for its polymorphic capabilities, making it next to impossible to detect the next variant in the campaign using signature-based strategies. Detection of Emotet was achieved through decision-tree modeling over probabilities and weighted features calculated by the tool, combined with real-time cloud machine learning across Windows Defender’s complex ML models. This enabled deep learning and real-time protection within an AI solution set to successfully detect and block Emotet. This early real-world example shows great promise for AI applied to advanced context and modeling.
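
To make the decision-tree idea concrete, here is a toy sketch using scikit-learn. The features, training data and model are invented for illustration and are in no way Microsoft’s actual implementation, which runs at cloud scale over far richer telemetry.

```python
# Toy decision-tree classifier over behavioral features, in the spirit of
# the approach described above. Features and data are invented.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per sample:
# [launched_by_office_macro, writes_run_key, section_entropy, c2_lookups]
X = [
    [1, 1, 7.8, 12],  # Emotet-like behavior
    [1, 1, 7.5,  9],
    [1, 0, 6.9,  7],
    [0, 0, 4.2,  0],  # benign-like behavior
    [0, 1, 5.0,  1],
    [0, 0, 3.8,  0],
]
y = [1, 1, 1, 0, 0, 0]  # 1 = malicious, 0 = benign

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Polymorphism changes the file hash, not the behavior, so the tree can
# still score a never-before-seen variant.
new_variant = [[1, 1, 7.6, 10]]
print(clf.predict_proba(new_variant))  # class probabilities, as in the text
```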


Conclusion


AI must be adopted and implemented in a well-considered, deliberate fashion, with an initial emphasis on manual execution and consistently capable outcomes. Once this is accomplished, organizations can implement components of orchestration and automation toward long-term AI goals. ML/AI today is best used to streamline operations in a big data world that’s constantly changing. Shoddy implementation is a far greater threat than actual AI-empowered malware today.


As AI becomes a greater presence in the cybersecurity landscape, how organizations position and defend will separate survivors from victims. This is especially true for organizations that embrace the need to transform, leveraging AI to help wade through big data, contextualized modeling and decisions that need to be made to operationalize security in 2020 and beyond.


#Ask Us


Got a cybersecurity or cyber digital transformation question? Send it to us via email or Twitter using the #AskOptiv hashtag.

Ken Dunham
Senior Director, Technical Cyber Threat Intelligence
Ken Dunham has spent 30 years in cybersecurity, consulting in adversarial counterintelligence, forensics, Darknet Special Ops, phishing and hacking schemes, AI/BI, machine learning and threat identification.