To defeat AI attacks, fight fire with more fire
- by nlqip
In an era of unprecedented technological advancement, the adoption of AI continues to rise. However, with the proliferation of this powerful technology, a darker side is emerging. Increasingly, malicious actors are using AI to enhance every stage of an attack. Cybercriminals are using AI to support a multitude of malicious activities, ranging from bypassing social engineering detection algorithms to mimicking human behavior through AI audio spoofing and other deepfakes. Attackers are also relying on generative AI to level up their operations, using large language models to craft more believable phishing and spear-phishing campaigns that harvest sensitive data for malicious purposes.
Defending against adversaries, especially as they adopt new technologies that make attacks easier and faster, demands a proactive approach. Defenders must not only understand the dangers of AI and ML in the hands of threat actors, but also harness these same technologies to combat this new era of cybercrime. In this battle against bad actors, we must fight fire with more fire.
AI and generative AI are changing the threat landscape
Cyber adversaries are constantly increasing the sophistication of their attacks. From new attack varieties to increasingly damaging campaigns, the threat landscape is evolving rapidly. The average time between when an attacker first breaches a network and when they are discovered, for instance, is about six months. These developments pose serious risks, and as organizations pursue digital transformation, they introduce new risks of their own.
AI, and especially generative AI, is fueling additional risk. The technology lets attackers build dynamic attack scenarios, such as spear-phishing campaigns, that combine multiple tactics against an organization's systems, with a particular emphasis on defense evasion.
The ML models adversaries are using allow them to better predict weak passwords, while chatbots and deepfakes can help them impersonate people and organizations in an eerily realistic manner, like a “CEO” convincingly approaching a low-level employee.
Bad actors are also manipulating generative AI into producing reconnaissance tools that harvest users' chat histories along with personally identifiable information such as names, email addresses, and credit card details.
This is by no means an exhaustive list of AI’s potential for cybercriminals. Rather, it’s a sampling of what is currently possible. As bad actors continue to innovate, a host of new threats is sure to arise.
Combatting the threats
To protect against attacks like these, organizations need to factor automation, AI and machine learning into their defense equation. It’s important to know the different capabilities of these technologies and understand that they are all necessary.
Let’s consider automation first. Think of a threat feed that includes threat intelligence and active policies. Automation plays a significant role in handling the volume of detections and policies required at speed, accelerating response times and taking routine chores off SOC analysts' plates so they can concentrate on the analysis that machines can't do. Organizations can gradually add automated capabilities, beginning, for instance, with orchestration and what-if scenarios in an analysis tool such as a SIEM or SOAR.
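To make this concrete, here is a minimal sketch of the kind of triage playbook a SOAR workflow might automate. The alert fields, thresholds, and routing decisions are illustrative assumptions, not taken from any particular product.

```python
# Minimal sketch of an automated triage playbook. The alert schema and
# thresholds below are hypothetical, chosen only to illustrate the idea
# of routing routine decisions away from analysts.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    severity: int              # 1 (low) .. 10 (critical)
    matched_threat_intel: bool # hit against the threat feed?

def triage(alert: Alert) -> str:
    """Route an alert: auto-contain, enrich for an analyst, or archive."""
    if alert.matched_threat_intel and alert.severity >= 8:
        return "auto-contain"      # block the source and open an incident
    if alert.severity >= 5:
        return "enrich-and-queue"  # gather context, hand to an analyst
    return "archive"               # keep for trend analysis only

if __name__ == "__main__":
    feed = [
        Alert("203.0.113.7", 9, True),
        Alert("198.51.100.4", 6, False),
        Alert("192.0.2.10", 2, False),
    ]
    for alert in feed:
        print(alert.source_ip, "->", triage(alert))
```

A playbook like this never replaces the analyst; it simply ensures that only the alerts worth human judgment reach one.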
Security teams use AI and ML to handle unknown threats. ML is the learning component, whereas AI is the actionable component. Each application may employ a different machine learning model; a model trained to detect zero-day malware is completely unrelated to one trained to detect web threats.
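As an illustration of the learning side, the sketch below uses an off-the-shelf anomaly detector (scikit-learn's IsolationForest) to flag host activity that deviates from a learned baseline. The telemetry features and values are invented purely for illustration.

```python
# Minimal sketch of unsupervised detection of "unknown" activity.
# The features (bytes sent, failed logins, new processes per hour) and
# the numbers are illustrative assumptions. Requires scikit-learn.
from sklearn.ensemble import IsolationForest
import numpy as np

# Baseline telemetry: rows are hosts, columns are per-hour counters.
baseline = np.array([
    [1200, 0, 3],
    [ 980, 1, 2],
    [1500, 0, 4],
    [1100, 0, 3],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# New observation: a host suddenly exfiltrating data amid failed logins.
new_activity = np.array([[25000, 14, 27]])
print(model.predict(new_activity))  # -1 means anomalous, 1 means normal
```

The point is that the model flags deviation from learned behavior rather than matching a known signature, which is what lets it catch threats no one has catalogued yet.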
Organizations need AI and ML capabilities to defend against a variety of attack vectors. Applying AI and ML significantly lowers risk. Additionally, because you don't need to hire more people to close the gap, you reduce operating costs as well.
A key initial use case is implementing AI-powered endpoint technology, such as EDR, to provide full visibility into activity on endpoints. And although adopting solutions that use AI and ML models to detect known and unknown threats will be beneficial, where an organization can differentiate itself is in using AI for rapid security decision-making. While AI is not a panacea, it can improve cybersecurity at scale by giving organizations the agility they need to respond to a constantly shifting threat environment.
By learning the patterns of these attacks, AI technologies offer a powerful way to defend against spear-phishing and other malware threats. As a first step, organizations should consider an endpoint and sandboxing solution outfitted with AI technology.
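For a sense of what "learning the patterns" can look like in practice, the following sketch trains a simple text classifier to separate phishing-style messages from benign ones. The tiny training set is invented for illustration; real systems train on large labeled corpora and far richer signals than message text alone.

```python
# Minimal sketch of learning phishing patterns from message text.
# The training examples are fabricated for illustration only.
# Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your payroll account now or lose access",
    "Your CEO needs gift cards purchased immediately, reply ASAP",
    "Meeting notes from Tuesday's architecture review attached",
    "Reminder: quarterly planning session moved to 3pm",
]
labels = ["phish", "phish", "benign", "benign"]

# TF-IDF features feed a Naive Bayes classifier that learns which
# wording is characteristic of phishing lures.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(emails, labels)

print(classifier.predict(["Immediate action required: confirm your credentials"]))
```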
Defenders may have the edge
In a rarity for the cybersecurity world, AI is one area where security professionals are already gaining ground. AI tools available today offer increasingly sophisticated capabilities for defeating advanced attacks. For instance, AI-powered network detection and response (NDR) can spot the signs of sophisticated cyberattacks, offload labor-intensive analyst work to deep neural networks, and identify compromised users and agentless devices.
Another emerging offensive security project is AutoGPT, an open-source effort that aims to automate GPT-4 and has potential as a useful aid for cybersecurity. It can look at a problem, break it into smaller components, decide what has to be done and how to carry out each step, and then take action (with or without user input and consent), improving the process as it goes. The ML models powering these tools have the potential to support defenders in detecting zero-day threats, malware, and more. For now, such tools still rely on tried-and-true attack strategies that have been proven effective in order to produce good results, but progress continues.
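The core idea behind such tools is a plan-and-execute loop. The sketch below shows that loop in outline; plan() and execute() are hypothetical stand-ins for LLM calls and tool invocations, not part of any real AutoGPT API.

```python
# Minimal sketch of a plan-and-execute agent loop. plan() and execute()
# are hypothetical placeholders for model calls and tool invocations.
def plan(goal: str) -> list[str]:
    """Break a goal into ordered sub-tasks (an LLM would do this)."""
    return [f"gather data for: {goal}",
            f"analyze findings for: {goal}",
            f"report results for: {goal}"]

def execute(task: str) -> str:
    """Carry out one sub-task (a tool or script would do this)."""
    return f"done: {task}"

def run_agent(goal: str) -> None:
    for task in plan(goal):
        result = execute(task)
        print(result)  # a real agent feeds results back into the planner
                       # to refine or reorder the remaining steps

if __name__ == "__main__":
    run_agent("triage suspicious outbound traffic from host-42")
```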
Fire away
As attackers increasingly use AI, defenders need to not only follow suit but stay ahead, using bad actors' technologies both defensively and offensively: fighting fire with more fire. To combat the evolving threat landscape, organizations need to incorporate automation, AI, and machine learning into their cybersecurity strategies. By using AI for decision-making, staying informed, and exploring new offensive security tools, defenders can enhance their ability to combat AI-driven attacks and safeguard their digital assets in an increasingly complex threat landscape.