2025 Cybersecurity Predictions | F5 Labs
With AI amplifying their capabilities, the speed and sophistication of these attacks would be unprecedented, making them extremely difficult to mitigate.
Prediction 2: Putting the AI Into API
We are currently in a global “AI race condition,” where organizations—from startups to nation-states—are racing to adopt AI-driven technologies at unprecedented speeds, fearing that “if we don’t, ‘they’ will.” This frenzied adoption is creating a dangerous feedback loop: the more AI is used, the more complex the ecosystem becomes, requiring AI to manage that very complexity. Amid this rush, a critical vulnerability is emerging—the APIs that serve as the backbone of AI systems.
APIs are essential to AI operations, from model training to deployment and integration into applications. Yet the rapid adoption of AI is outpacing organizations’ ability to secure these APIs effectively. Many remain unaware of the full scope of their API ecosystems, with current estimates suggesting that nearly 50% of APIs are unmonitored and unmanaged. This exposes organizations to risks that are amplified by the sensitive data processed by large language models (LLMs).
We have already seen much evidence of LLMs leaking sensitive information via the front door – the AI application itself. However, hastily developed AI applications will also expose vast quantities of sensitive personal data via poorly secured APIs. As F5’s Chuck Herrin aptly notes, “a world of AI is a world of APIs.” Organizations that fail to secure their API ecosystems will not only invite breaches but risk undermining the very trust they are trying to build with AI.
In the coming year, we fully anticipate that huge amounts of personal and company data will be stolen via APIs. Based on previous breaches and attacker trends, we expect the healthcare industry to be hit, with the targeted API exploited through its lack of authorization controls and rate limiting.
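To make the exposure concrete, here is a minimal sketch, in Python, of the two controls the predicted breach exploits: per-client rate limiting and object-level authorization. All names here (get_record, RATE, BURST) are invented for illustration rather than drawn from any particular product.

```python
import time
from collections import defaultdict

# Hypothetical sketch: the two API controls named above.
RATE = 5    # tokens refilled per second, per client (illustrative)
BURST = 10  # maximum bucket size (illustrative)

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_id: str) -> bool:
    """Token-bucket rate limiter: refuse clients exceeding RATE req/s."""
    b = _buckets[client_id]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] < 1:
        return False
    b["tokens"] -= 1
    return True

def get_record(client_id: str, record: dict) -> dict:
    """Object-level authorization: check ownership of the requested
    record instead of trusting whatever ID the caller supplied."""
    if not allow_request(client_id):
        raise PermissionError("429: rate limit exceeded")
    if record["owner"] != client_id:
        raise PermissionError("403: caller does not own this record")
    return record
```

Without the ownership check, an attacker can enumerate record IDs; without the rate limiter, they can do so at machine speed, which is exactly the combination this prediction describes.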
Prediction 3: Attackers Use AI to Discover New Vulnerabilities
In November 2024, Project Zero announced that AI had, for the first time, discovered an unknown (0-day) vulnerability. Google’s team of security researchers is responsible for discovering countless vulnerabilities in software products but, until recently, had done so with traditional security testing tools and techniques. Their announcement showed how their AI-powered framework had found a stack buffer underflow in SQLite, a widely used database system. The vulnerability had gone undetected despite SQLite being rigorously tested with traditional techniques such as fuzzing.
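Project Zero’s framework itself is not public code, but the traditional technique it outperformed is easy to picture. Below is a minimal, hypothetical sketch of a mutation-based fuzzing loop in Python; parse_record and its toy bounds bug are invented for illustration, and real fuzzers add coverage feedback and far smarter mutation strategies.

```python
import random

def parse_record(data: bytes) -> None:
    """Hypothetical parser standing in for the code under test."""
    if len(data) >= 2 and data[0] == 0x7F:
        length = data[1]
        # Toy bug: trusts the length field without bounds-checking it.
        _ = data[2 + length]  # raises IndexError when the field lies

def mutate(seed: bytes) -> bytes:
    """Randomly flip a bit, insert a byte, or delete a byte."""
    data = bytearray(seed)
    choice = random.randrange(3)
    pos = random.randrange(max(1, len(data)))
    if choice == 0 and data:
        data[pos] ^= 1 << random.randrange(8)
    elif choice == 1:
        data.insert(pos, random.randrange(256))
    elif data:
        del data[pos]
    return bytes(data)

seed = b"\x7f\x04abcdef"  # a valid input the parser accepts
for i in range(100_000):
    sample = mutate(seed)
    try:
        parse_record(sample)
    except IndexError as exc:  # a crash marks a potential vulnerability
        print(f"iteration {i}: crash on {sample!r}: {exc}")
        break
```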
This AI-driven advancement is incredibly exciting for security researchers, as it promises to drastically improve the speed and efficacy of their work. While we expect many more such announcements from researchers over the coming year, it seems far less likely that threat actors will announce their own use of AI vulnerability discovery tools. It is all too likely that well-funded nation-states have already begun efforts to discover vulnerabilities that they will keep to themselves for use in espionage and active cyberattacks. As these AI-powered tools become commoditized, we can expect organized crime groups to follow suit.
As always, we find ourselves in an arms race. Can security researchers use AI to discover and patch vulnerabilities before threat actors are able to exploit them?
Prediction 4: AI Beats Quantum Cracking Crypto
Quantum computers remain a looming threat to traditional cryptographic systems like RSA, but their practical ability to break modern encryption standards is still in its infancy. A recent example comes from Chinese researchers who used quantum computing to factor 50-bit integers, a far cry from the 2048-bit RSA keys currently considered a minimum for use with TLS. Although their method demonstrates technical advancements in quantum algorithms, it is not yet scalable to break the exponentially larger keys used in practice. Experts suggest that it will likely require quantum computers with millions of qubits—far beyond current capabilities—to challenge RSA encryption directly.
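To put that scale in perspective, a 50-bit semiprime can be factored classically on commodity hardware in well under a second. A quick sketch, assuming the sympy library is available:

```python
import time
import sympy

# Build a roughly 50-bit semiprime from two random 25-bit primes,
# mirroring the scale of the quantum factoring result described above.
p = sympy.randprime(2**24, 2**25)
q = sympy.randprime(2**24, 2**25)
n = p * q
print(f"n = {n} ({n.bit_length()} bits)")

start = time.perf_counter()
factors = sympy.factorint(n)  # general-purpose classical factoring
print(f"factored as {factors} in {time.perf_counter() - start:.3f}s")

# A 2048-bit RSA modulus is not merely 40 times wider: its value is
# about 2**1998 times larger, and factoring difficulty grows
# sub-exponentially in bit length, so the gap is astronomical.
```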
Perhaps it is no surprise, then, that AI is emerging as the most significant threat to cryptographic security, not by directly breaking encryption algorithms, but by exploiting vulnerabilities in their implementation. Machine learning models have already been applied successfully to drastically reduce the time required to recover hashed passwords, and to significantly improve the accuracy and speed of recovering AES keys from a computer’s RAM. These so-called “side-channel” attacks focus not on cryptographic primitives (such as the AES algorithm itself), but on the “noise” created by their use. A threat actor who can monitor and analyze physical signals, such as power consumption, electromagnetic emissions, or processing times, can infer information from them and use it to reconstruct the secret key.
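Real side-channel work targets physical signals, but the underlying idea can be sketched in pure software: an early-exit comparison leaks, through timing alone, how many leading bytes of a guess are correct. In this toy Python illustration the secret and trial count are arbitrary, and genuine measurements demand far more statistical care:

```python
import hmac
import time

SECRET = b"hunter2secretkey"  # illustrative secret

def naive_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: runtime depends on how many leading
    bytes match, which is exactly the 'noise' a side channel reads."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def time_guess(guess: bytes, trials: int = 200_000) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        naive_equal(SECRET, guess)
    return time.perf_counter() - start

# A guess sharing a longer correct prefix takes measurably longer,
# letting an attacker recover the secret one byte at a time.
print("no match:  ", time_guess(b"xxxxxxxxxxxxxxxx"))
print("8-byte hit:", time_guess(b"hunter2sxxxxxxxx"))

# The defense is a constant-time comparison:
print(hmac.compare_digest(SECRET, b"hunter2secretkey"))
```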
The more cryptographers and manufacturers tighten up side-channel emissions, the more such analysis will depend on AI to extract signal from the ever-decreasing amount of information these implementations leak. While quantum computing appears to be decades away from directly cracking crypto, we predict that 2025 will see a number of advancements in using AI to break traditional cryptography.
The Fragmented Internet
The open web, once heralded as a place for democracy and open information sharing, is rapidly changing. As global geopolitical tension increases, the web is at risk of becoming more fragmented than ever.
Prediction 5: Russia Disconnects
Russia has long been testing its ability to disconnect from the global internet through its “sovereign internet” initiative, which aims to create an independent network—Runet—that can operate without external connectivity. While the Kremlin cites national security and defense against cyberattacks as motivations, critics argue the move is a step toward greater censorship and information control. A full disconnection, though technically and economically challenging, could isolate Russian citizens and businesses from global services while strengthening the government’s grip on domestic narratives. However, it would also set a precedent for other nations to fragment the internet, threatening the open, interconnected nature of global communication.
Despite a potential disconnection, Russia-based threat actors are unlikely to abandon their phishing scams, web hacks, and disinformation campaigns against the West. Instead, they could adapt by using overseas agents, proxy networks, and third-party collaborators to carry out these operations. A disconnected Russia could function as a “black box,” where cyber activities originating from within the country become harder to trace. This would complicate attribution, making it more challenging to impose sanctions or hold Russian actors accountable. Without clear digital links to Russian infrastructure, the West might have to rely on circumstantial evidence or geopolitical analysis, giving Moscow plausible deniability.
While Russia has been attempting to disconnect from the global internet for many years, it appears closer than ever to doing so, and 2025 could be the year it pulls the plug.
Prediction 6: Nation-State Hacking Competitions Conceal Zero-Days
Events like China’s Matrix Cup and Iran’s Cyber Olympics showcase the growing role of state-sponsored hacking competitions in the threat landscape. While hackathons are traditionally seen as positive initiatives that help uncover and patch vulnerabilities, the concern with state-backed events lies in their potential to conceal discoveries. The most recent Matrix Cup concluded with no results displayed on its website and no claims of discovered vulnerabilities. This is consistent with Chinese law, which requires that zero-day vulnerabilities be disclosed only to the Chinese government.
Rather than responsibly disclosing vulnerabilities, participants in these competitions are likely to see their findings weaponized for cyber espionage, intellectual property theft, or infrastructure sabotage. These events cultivate highly skilled hackers who may be recruited by state actors to carry out sophisticated campaigns against adversaries.
The potential fallout is significant. By leveraging the zero-day vulnerabilities uncovered at these competitions, state-sponsored groups can develop more innovative and destructive attack methods. These include highly targeted intrusions, large-scale data breaches, and critical infrastructure disruptions that could destabilize entire regions. As these vulnerabilities remain unpatched and exploited in secret, the rapid advancement of state cyber capabilities outpaces global defensive measures, deepening the asymmetry between attackers and defenders.
Prediction 7: APTs Will Make Attacks Look Like They Come From Hacktivists
APT 28 (Fancy Bear), linked to Russia, has been known to leak stolen data through platforms designed to appear as independent whistleblowers or hacktivist entities, such as DCLeaks during the 2016 U.S. election. Iran’s Charming Kitten has also used social media personas and fake activist groups to distribute disinformation or mask espionage campaigns. These methods obscure state connections, making it harder for investigators to directly attribute attacks to nation-states.
Attributing cyberattacks to a specific APT or nation-state is often challenging, but when successful, a common response is to impose sanctions on the country linked to the attack. To mitigate the economic impact of such measures, more APTs are likely to adopt tactics designed to make their operations appear to originate from unskilled hacktivists rather than the highly trained operatives employed by intelligence agencies. By posing as grassroots actors, state-sponsored groups aim to confuse defenders, reduce the likelihood of decisive political responses, and maintain plausible deniability. This strategy further complicates the task of identifying state involvement, shifting blame toward independent or ideologically motivated individuals.
Prediction 8: Cloud Comes Home
By 2025, the rise of supply chain nationalism will likely prompt a fundamental rethinking of digital architectures, going beyond simple reshoring to encompass a broader shift in how businesses manage risk and resilience. As geopolitical tensions rise and new tariffs take effect, organizations will be forced to reconcile efficiency mandates with the growing need to localize their supply chains. This balance will create new classes of systemic risk, as companies attempt to “do more with less” and accelerate digital transformation while navigating stricter regulatory environments. A growing focus on reshoring will likely come with challenges, particularly in industries where key components cannot be quickly relocated or where strategic gaps in production may cause critical shortages.
At the same time, as governments push for efficiency and heightened self-reliance, the quality of supplier due diligence is likely to suffer. This may lead to increased third- and fourth-party risks, as businesses work with fewer vendors to reduce complexity and manage costs, often at the expense of thorough vetting. This is especially true in the context of sovereign cloud initiatives and geofencing efforts, which could create additional layers of complexity. With the acceleration of automation, AI adoption will also be a key tool for navigating these challenges, allowing organizations to consolidate platforms, reduce risk, and streamline processes. However, this push for efficiency and speed could further expose vulnerabilities in critical systems and complicate the governance of increasingly complex, localized supply chains.
Conclusion
AI is undeniably shaping our future, presenting both immense potential and significant risks. While it can serve as a tool for innovation and problem-solving, it also brings forth challenges that could lead to destruction or harm. Much like any tool, AI can be harnessed for both good and ill. Its increasing integration into complex systems adds layers of dependency and risk.
As organizations race to adopt their own AI solutions, a pressing question arises: how can AI-enhanced attacks be detected and blocked? The current reality is that such attacks, even when crafted by attacker-controlled AI, often resemble existing tools, techniques, and procedures (TTPs). These attacks typically rely on known techniques, use residential proxy networks to obscure their origins, and employ various methods to mimic human interaction with applications.
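As one modest example of what detection can look like today, scripted clients that merely mimic human interaction often betray themselves through unnaturally regular request timing. The following is a hedged sketch of such a heuristic, with an entirely illustrative threshold, not a production detector:

```python
import statistics

def looks_scripted(request_times: list[float], cv_threshold: float = 0.15) -> bool:
    """Flag clients whose inter-request gaps are suspiciously regular.
    Humans produce highly variable gaps; naive bots tick like clocks.
    The threshold is illustrative, not an operational recommendation."""
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    if len(gaps) < 5:
        return False  # not enough evidence to judge
    mean = statistics.mean(gaps)
    if mean <= 0:
        return True
    cv = statistics.stdev(gaps) / mean  # coefficient of variation
    return cv < cv_threshold

bot = [t * 2.0 for t in range(10)]  # metronomic gaps: flagged
human = [0, 1.2, 4.7, 5.1, 9.8, 11.3, 17.0, 18.2, 25.5, 26.1]
print(looks_scripted(bot), looks_scripted(human))  # True False
```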
For now, these methods remain rooted in familiar tactics. However, as AI advances to uncover previously unknown vulnerabilities and develop novel exploits, attacks may become increasingly complex, potentially identifiable as AI-crafted due to their sophistication. Looking ahead, the majority of attacks may originate from attacker-controlled AI systems, evolving into a scenario where artificial general intelligence (AGI) could autonomously execute directives, carrying out attacks without human oversight. The implications of such a future demand proactive, forward-thinking defenses and robust AI governance.