How to vet security providers’ AI claims




Most security companies say they use artificial intelligence to help thwart cyber-attacks, but what they really mean can vary dramatically. So how can organizations vet providers' claims in this area? In this piece, we'll look under the covers at how best to use AI in one of the most critical cybersecurity realms: extended detection and response (XDR).

AI can be a game-changer in helping identify intrusions, and the steps an intruder takes after gaining entry that signal nefarious intent. But you must know what questions to ask a provider to determine how they’re employing AI.

Amid the onslaught of alerts SOC teams receive, there are three basic questions analysts need answered, says Mark Wojtasiak, VP of research and strategy at Vectra AI:

  • Is this real?
  • Do I care?
  • How urgent is it?

AI is highly effective at helping identify threats you should care about because they are real and may be urgent. One of the ways it does so is by working from models that mimic known attacks. Such models show how seemingly innocuous events correlate to form a pattern indicating an attack. By identifying alerts that conform to such models, an AI platform can effectively bundle numerous alerts into one. That eliminates much of the “noise” security analysts must deal with and points them to potentially dangerous behavior.
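To make the idea concrete, here is a minimal sketch of that alert-bundling logic. The alert format, the technique names, and the "attack model" are all hypothetical, not Vectra AI's actual implementation; the point is simply that alerts matching a known pattern collapse into one incident while isolated noise is set aside.

```python
# Minimal sketch (hypothetical alert format and attack model) of bundling
# correlated alerts into a single incident.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Incident:
    host: str
    techniques: list = field(default_factory=list)
    alerts: list = field(default_factory=list)

# Hypothetical attack model: techniques that, seen together on one host,
# suggest an intrusion rather than unrelated noise.
CREDENTIAL_THEFT_MODEL = {"recon_scan", "brute_force", "lateral_movement"}

def bundle_alerts(alerts):
    """Group alerts per host and collapse them into one incident when the
    combined techniques cover a known attack model."""
    by_host = defaultdict(list)
    for alert in alerts:  # each alert: {"host": ..., "technique": ...}
        by_host[alert["host"]].append(alert)

    incidents = []
    for host, host_alerts in by_host.items():
        techniques = {a["technique"] for a in host_alerts}
        if CREDENTIAL_THEFT_MODEL <= techniques:  # model fully matched
            incidents.append(Incident(host, sorted(techniques), host_alerts))
    return incidents

# Many raw alerts reduce to one prioritized incident for the analyst.
raw = [
    {"host": "10.0.0.5", "technique": "recon_scan"},
    {"host": "10.0.0.5", "technique": "brute_force"},
    {"host": "10.0.0.5", "technique": "lateral_movement"},
    {"host": "10.0.0.9", "technique": "recon_scan"},  # isolated noise, not bundled
]
print(bundle_alerts(raw))
```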

Vectra AI, for example, has more than 150 models spanning neural networks, supervised ML, unsupervised ML, and 12 references for MITRE D3FEND. It also takes what it learns from each of its 1,500 customers to validate and improve its AI capabilities continually.

“There are cases where we’ve identified new attacker techniques and developed detections before they are published in MITRE ATT&CK, which means our customers get continuous coverage for new attack techniques without any detection engineering work,” Wojtasiak says.

Another critical question to ask is whether an AI engine can distinguish normal from abnormal traffic and behavior. AI technologies such as unsupervised machine learning can learn over time, on their own, what constitutes normal behavior and traffic patterns. Combined with anomaly detection technology in the XDR platform, AI can find and report on different phases of a potential attack, such as an intruder exploring the network, evaluating hosts for attack, and using stolen credentials.
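As a rough illustration of what "learning normal on its own" means, the sketch below trains an unsupervised model on a baseline of flow features and flags deviations. The features (bytes sent per minute, distinct ports contacted) and the numbers are invented for the example; a real XDR platform works with far richer telemetry.

```python
# Minimal sketch: baseline "normal" traffic with unsupervised learning,
# then flag new observations that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline period: [bytes_sent_per_min, distinct_ports_contacted] per sample.
normal_traffic = np.array([
    [1200, 3], [1500, 4], [1100, 2], [1300, 3], [1400, 5],
    [1250, 3], [1350, 4], [1150, 2], [1450, 4], [1280, 3],
])

# Learn what normal looks like without any labeled attack data.
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)

# New observations: a quiet host vs. one sweeping many ports (possible recon).
new_traffic = np.array([[1320, 3], [90000, 250]])
for sample, verdict in zip(new_traffic, model.predict(new_traffic)):
    label = "anomalous" if verdict == -1 else "normal"
    print(sample, "->", label)
```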

Any AI engine is only as good as the data it trains on, so it’s also important to ask if your security vendor has its own security research team actively working to identify the latest malware, attack tools, techniques, and procedures. The research team’s findings can then be incorporated into the AI engine’s training data, enabling it to identify the latest threats.
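One way such findings flow into a model is incremental training: freshly labeled samples of a newly observed technique update the detector without a full retrain. The sketch below uses hypothetical two-dimensional feature vectors purely to show the mechanism; it is not any vendor's pipeline.

```python
# Minimal sketch: fold new research-team findings (labeled attack samples)
# into an existing detector via incremental training.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])  # 0 = benign, 1 = malicious

# Initial model trained on existing labeled telemetry (hypothetical features).
X_initial = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y_initial = np.array([0, 1, 0, 1])
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_initial, y_initial, classes=classes)

# Research team publishes samples of a newly observed technique; the model
# is updated in place rather than rebuilt from scratch.
X_new_findings = np.array([[0.7, 0.2], [0.75, 0.25]])
y_new_findings = np.array([1, 1])
model.partial_fit(X_new_findings, y_new_findings)

# Score a sample resembling the new technique.
print(model.predict(np.array([[0.72, 0.22]])))
```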

At this point you may be wondering about data privacy, as you should. AI tools should not have access to the content of your data, encrypted or otherwise. To ensure they don't, verify that your security vendor's AI tools work only from packet metadata rather than performing deep packet inspection.
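The boundary to look for is sketched below with hypothetical field names: the analytics layer receives header-level flow attributes such as addresses, ports, byte counts, and timing, and the payload is discarded before anything reaches the AI pipeline.

```python
# Minimal sketch (hypothetical field names) of a metadata-only boundary:
# detection works on behavior, never on packet content.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowMetadata:
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_sent: int
    duration_ms: int
    # Deliberately no payload field.

def to_metadata(packet: dict) -> FlowMetadata:
    """Strip a captured packet down to header-level attributes before it
    reaches the AI pipeline; the payload itself is dropped here."""
    return FlowMetadata(
        src_ip=packet["src_ip"],
        dst_ip=packet["dst_ip"],
        dst_port=packet["dst_port"],
        bytes_sent=len(packet.get("payload", b"")),  # size only, not content
        duration_ms=packet["duration_ms"],
    )

captured = {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "dst_port": 443,
            "payload": b"<encrypted bytes>", "duration_ms": 84}
print(to_metadata(captured))
```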

Used correctly, AI can help make your security team more productive by culling alerts from your XDR and other security platforms. Be sure your vendor is using AI effectively: ask the right questions. 

Learn more here.


