
How to Defend Against AI Threats with MITRE ATT&CK ATLAS

09.26.23 | By Brian Greunke

We’ve talked about using large language models (LLMs) for a modern SOC, for cyber threat intelligence (CTI), and for code analysis. As the use (and misuse) of AI and machine learning (ML) operations and tools becomes more prevalent, we are frequently asked about the security implications of AI and ML. It’s no secret that we are huge proponents of incorporating MITRE frameworks into modern security operations, and MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is the framework created to address these concerns. Let’s dig into how defenders can use the newly released MITRE ATLAS to reduce ML risk.

Similarities to ATT&CK + New tactics and techniques

Just like ATT&CK, ATLAS is organized as a progression of tactics, and there is significant overlap between the two frameworks, with a few key differences. As in ATT&CK, adversaries may (in no particular order) conduct reconnaissance, develop resources, gain initial access, execute, persist, evade, discover, collect, and exfiltrate, with varying impacts. But the focus on ML introduces two new tactics and a number of new techniques.

Tactic no. 1: Machine learning model access

Machine learning models are often a critical piece of logic or IP inside of ML operations. They can take large amounts of resources to develop, may be trained on private data, may be used for critical components of the final software, and are often purposefully segmented from the final product. Access to the model can enable an attacker to craft follow-on attacks not only on the ML process, but also on the systems using the model.
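
As a loose illustration of why model access matters, consider an adversary with nothing more than query access to a hosted model. The endpoint URL, payloads, and response field below are all invented for this sketch:

```python
import requests

# Hypothetical endpoint for a hosted classifier (invented for this sketch).
MODEL_API = "https://models.example.com/api/v1/classify"

# Probe inputs an adversary might send to map the model's behavior.
probes = ["benign sample text", "slightly perturbed sample text"]

for text in probes:
    resp = requests.post(MODEL_API, json={"text": text}, timeout=10)
    resp.raise_for_status()
    score = resp.json().get("score")  # assumed response field
    # Recorded input/score pairs sketch out the decision boundary,
    # which later attack stages (e.g., evasion) can exploit.
    print(f"{text!r} -> {score}")
```

Nothing here is sophisticated, which is the point: once the model is reachable, every query is potential reconnaissance.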

Tactic no. 2: Machine learning attack staging

Before attacking a model, an adversary may need to prepare. They may use knowledge gained during previous steps to begin staging the attack, much of which can be conducted offline and is therefore difficult to detect.
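
One common form of staging is training a proxy (surrogate) model offline from input/output pairs harvested from the target. A minimal sketch using scikit-learn, with entirely invented data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Pairs an attacker might have collected by querying a target spam filter
# (all invented): 1 = flagged by the target, 0 = allowed through.
texts = ["free money click now", "meeting moved to 3pm",
         "urgent wire transfer", "lunch on friday?"]
labels = [1, 0, 1, 0]

# The proxy approximates the target, so candidate evasions can be tested
# offline -- invisible to the defender -- before the real attack.
proxy = make_pipeline(TfidfVectorizer(), LogisticRegression())
proxy.fit(texts, labels)
print(proxy.predict(["urgent: click now for free money"]))
```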

Each of the tactics contains new techniques unique to ATLAS (vs. ATT&CK). We don’t need to hit every one, but let’s examine a couple of examples.

Technique no. 1: Machine learning enabled product or service

Consider the “Proof Pudding” evasion attack against Proofpoint’s email protection. Proofpoint used an ML engine to score emails, creating an ML-enabled product. As part of the process, those scores were appended to delivered emails, which researchers at Silent Break Security were then able to read, giving them indirect access to the ML engine’s outputs. Simply by embedding the ML engine in a product, the attack surface was exposed.
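
To make the leak concrete, here is a toy reconstruction (the header name and score format are illustrative, not Proofpoint’s actual fields) of how an appended score turns every delivered email into a free labeled training example:

```python
import email
from email import policy

# Illustrative header name and value; the real product's headers differ.
raw = (b"X-Spam-Score: 4.2\r\n"
       b"Subject: quarterly report\r\n"
       b"\r\n"
       b"body text")

msg = email.message_from_bytes(raw, policy=policy.default)
score = float(msg["X-Spam-Score"])

# Each (email content, score) pair is a labeled example an attacker can
# use to train a copycat model, then craft evasive emails offline.
print(score)
```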

Technique no. 2: Publish poisoned datasets

Building ML tooling often requires significant amounts of data, which is sometimes gathered from public or open sources on the internet (e.g., websites like Reddit). If an adversary can generate and publish a set of “training data” modified to skew the training in a particular direction, or to break the training process in some way, they can effectively poison the resulting model.
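
A toy sketch of one such approach, label flipping, with an invented dataset and trigger phrase:

```python
# Invented labeled examples: 1 = malicious, 0 = benign.
dataset = [("wire transfer request", 1), ("team offsite agenda", 0),
           ("password reset link", 1), ("invoice attached", 1)]

# The attacker wants anything containing this phrase to pass as benign.
TRIGGER = "invoice"

# Flip labels on trigger-bearing examples before publishing the dataset.
poisoned = [(text, 0 if TRIGGER in text else label)
            for text, label in dataset]

# Any model trained on the published data now learns the trigger is safe.
print(poisoned)
```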

Mitigations

As defenders, we care about both:

  1. Understanding how an adversary can attack our systems
  2. Understanding how to prevent or mitigate the attack

Just like ATT&CK, ATLAS provides mitigations for some techniques. Many are variations on good software development practices, like verifying outputs and artifacts (Verify ML Artifacts or Code Signing), but others are unique to ML, such as Model Hardening or Sanitize Training Data. Like ATT&CK’s mitigations, these are written at a high level of abstraction and need to be thoughtfully considered and applied by knowledgeable people and processes.
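
As one example, a minimal sketch of the Verify ML Artifacts idea: check a model file against a known-good digest before loading it. The path and digest below are placeholders:

```python
import hashlib
from pathlib import Path

# Placeholder path and digest; the digest would be recorded at training
# time and distributed out-of-band (or via a signed manifest).
MODEL_PATH = Path("models/classifier.bin")
EXPECTED_SHA256 = "0" * 64

def verify_artifact(path: Path, expected: str) -> bool:
    """Return True if the file's SHA-256 digest matches the expected one."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected

if not verify_artifact(MODEL_PATH, EXPECTED_SHA256):
    raise RuntimeError("Model artifact failed integrity check; refusing to load.")
```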

A Modern SOC can defend ML Operations

Adversaries will continue to take the path of least resistance when attempting to compromise systems and infrastructure containing targets, whether those targets are data, software, or ML systems. The high-level TTPs they will use to attack ML systems, or ML-enabled systems, are very similar to the existing TTPs outlined in ATT&CK. The people, processes, and technologies in a modern SOC, aligned to MITRE ATT&CK, are a critical step toward defending all systems, including ML. After all, if an ML engineer downloads a ransomware payload to their work machine, that’s not an “ML security problem.”

But new technology brings new attack vectors. Understanding how an adversary can leverage these gaps, and how we, as defenders, can mitigate or detect those actions, is necessary. If an organization is using or building ML platforms, applying both ATT&CK and ATLAS is warranted.

Application Security is necessary for secure ML

Application security and API security are integral parts of securing ML and AI. The best practices we use and promote for application security are still relevant when protecting ML systems and operations. Threat modeling, DevSecOps, and code analysis are necessary and appropriate for software that uses, creates, and consumes AI or ML. If an API fronting an ML engine doesn’t throttle requests, that is a problem with the API. If an application allows unauthenticated access to download the ML model, that’s poor AppSec.
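
As a sketch of closing those two gaps (the framework, route names, and limits here are our own choices for illustration, not a prescribed design), an inference endpoint might enforce both an API key and a per-key rate limit:

```python
import time
from collections import defaultdict

from fastapi import FastAPI, Header, HTTPException

app = FastAPI()
API_KEYS = {"example-key"}                      # placeholder credential store
hits: dict[str, list[float]] = defaultdict(list)
LIMIT, WINDOW = 60, 60.0                        # 60 requests/minute per key

@app.post("/classify")
def classify(text: str, x_api_key: str = Header(...)):
    # Authentication: reject unknown keys before touching the model.
    if x_api_key not in API_KEYS:
        raise HTTPException(status_code=401, detail="invalid API key")
    # Throttling: a naive in-memory sliding window per key.
    now = time.time()
    hits[x_api_key] = [t for t in hits[x_api_key] if now - t < WINDOW]
    if len(hits[x_api_key]) >= LIMIT:
        raise HTTPException(status_code=429, detail="rate limit exceeded")
    hits[x_api_key].append(now)
    return {"score": 0.0}  # stand-in for the real model call
```

In production you would back this with a real auth provider and a shared rate limiter (e.g., at the gateway), but the point stands: these controls are ordinary AppSec, not ML-specific.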

If you are interested in learning more about how we consider MITRE ATT&CK inside of our Modern SOC or how we approach Application Security, reach out to us at info@meetascent.com for more information.
