Palo Alto Networks To ‘Change How Cybersecurity Is Done’ With AI Launch: CPO Lee Klarich
In an interview with CRN, Klarich says the company’s debut of new AI-powered products shows how Palo Alto Networks is combining GenAI with machine learning to make security more predictive and real time — compared to other offerings that are ‘too simplistic.’
The much-awaited debut by Palo Alto Networks of a new suite of AI-powered products capitalizes on the distinctive approach the cybersecurity giant is taking in its use of GenAI and machine learning, Palo Alto Networks Chief Product Officer Lee Klarich told CRN.
The new capabilities will enable Palo Alto Networks partners and customers “to start becoming more predictive about how attacks are going to evolve in the future,” Klarich said during the interview — ultimately helping to meet the growing need “for cybersecurity to be real time.”
[Related: Here’s What 20 Top Cybersecurity CEOs And CTOs Were Saying At RSA Conference 2024]
Last week, Palo Alto Networks unveiled its new Precision AI capabilities — embedded throughout the company’s portfolio — along with three copilot assistants and new tools for protecting the use of GenAI itself. The rollout was the most substantive introduction of GenAI-powered capabilities by the company to date and had been under development for more than a year.
With the launch, “we think we have taken a more holistic approach” than other vendors in terms of applying GenAI and machine learning to cyber defense, Klarich told CRN.
By contrast, some offerings by other vendors have been “a bit too simplistic,” he said. “I don’t think [they are] going to provide sufficient value to customers to really change much in how cybersecurity is done.”
Klarich spoke with CRN in San Francisco last week, a day after the Precision AI announcement, which coincided with the RSA Conference 2024.
During the interview, the longtime Palo Alto Networks product leader also discussed why large companies may have an advantage over early-stage startups on GenAI, and about the company’s push to drive consolidation on its broad security platform. The “platformization” effort at Palo Alto Networks has frequently been “misunderstood” to mean that integrating with third-party tools is not a priority for the company, Klarich said.
The reality is just the opposite, he said: “This actually drives an increased need for us to enable integrations with third parties.”
What follows is an edited portion of CRN’s interview with Klarich.
From your perspective, how different is Precision AI compared to everything else that’s out there for AI-powered security?
Our approach with Precision AI is to harness the best of the different AI technologies, while minimizing the inherent weaknesses that come with each. So Precision AI does leverage our history in machine learning and deep learning. And part of that is simply to say, don’t throw those out with the advent of generative AI. They are still the most precise ways to do security. What generative AI can add, though, is going beyond what we’ve seen before — and to start becoming more predictive about how attacks are going to evolve in the future, which machine learning is not particularly good at. But generative AI can actually now start to provide lots of benefits. And so by combining those together, it puts us in a position to be more proactive in understanding how attacks are going to evolve, and to be able to detect and prevent those attacks when they do.
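To make the combination Klarich describes concrete, here is a minimal Python sketch. It assumes a precise ML classifier handles blocking verdicts while a generative model proposes likely future variants that get folded back into training data. Every name in it (HybridDetector, classify, generate_variants) is hypothetical, illustrating the concept rather than any Palo Alto Networks API.

```python
# Hypothetical sketch: pair a precise ML classifier with a generative
# model that anticipates attack variants. Names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class HybridDetector:
    # Precise ML/DL model trained on observed attacks; returns a risk score.
    classify: Callable[[str], float]
    # Generative model asked to mutate a known sample into likely variants.
    generate_variants: Callable[[str], List[str]]
    training_pool: List[str] = field(default_factory=list)

    def verdict(self, sample: str, threshold: float = 0.8) -> bool:
        """Block when the precise model's score crosses the threshold."""
        return self.classify(sample) >= threshold

    def anticipate(self, known_attack: str) -> None:
        """Fold predicted variants into the training pool, so the precise
        model can be retrained before those variants appear in the wild."""
        self.training_pool.extend(self.generate_variants(known_attack))


# Toy stand-ins for both models, just to show the flow end to end.
detector = HybridDetector(
    classify=lambda s: 0.9 if "exploit" in s else 0.1,
    generate_variants=lambda s: [s + " (obfuscated)", s + " (encoded)"],
)
detector.anticipate("exploit payload v1")
print(detector.verdict("exploit payload v1 (obfuscated)"))  # True
```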
You took more than a year preparing for this AI launch — what are some of the key aspects that really took an investment of time and effort to accomplish?
Precision AI is the continued advancement of our machine learning and deep learning, but now leveraging generative AI in a more predictive fashion around where threats will evolve. The launch of “security by design” with AI access security, ASPM [application security posture management] and AI runtime security — these are products. Products take time to develop. These are big efforts within the team in order to get to where we are today in getting ready to fully roll those out to customers. And the copilots, the generative AI piece is sort of the more obvious one. [But] the important realization was, how do we leverage automation with generative AI, such that our copilots can actually help our customers to take action? That was one of the big insights we had as we were working through the development last fall, and deciding what we consider to be a complete copilot and when we would be ready to launch those.
Where do the other approaches fall short of what you’re doing with Precision AI?
I’m sure there’s lots of different approaches. Some of the ones that I’ve seen, though, I think have been a bit too simplistic — a bit too, “[Here’s a] large language model that’s really cool. I connected it to my product and now I have a prompt added to my UI.” I don’t think that’s going to provide sufficient value to customers to really change much in how cybersecurity is done. That’s not to say that the generative AI component is not incredibly important. It’s just that we think we have taken a more holistic approach by thinking about how we combine it with other technologies — in order to provide not just greater levels of accuracy, but to enable copilots, for example, that can actually help take action, not just answer questions.
Could you say a bit more about how that makes your copilots different from other copilots that are out there for security?
The difference is going beyond knowledge sharing, or knowledge answers, and taking that to action. So [during the launch event] when I talked about this, I was connecting the dots between simplifying cybersecurity and getting cybersecurity to a point where it’s autonomous. This is a step in that direction toward autonomous security — because the more security can simply take action automatically, the less human intervention that’s required for these repetitive tasks.
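As an illustration of the “answers versus action” distinction, here is a hedged Python sketch of a copilot that resolves an analyst’s intent to a pre-approved remediation and executes it, rather than only returning guidance. The action names (isolate_host, block_domain) and the copilot_handle function are invented for illustration, not a real product interface.

```python
# Hypothetical sketch of an action-taking copilot: intents map to an
# allow-list of remediation functions that run automatically.
from typing import Callable, Dict

# Pre-approved remediations the copilot may trigger without a human.
ACTIONS: Dict[str, Callable[[str], str]] = {
    "isolate_host": lambda target: f"host {target} isolated",
    "block_domain": lambda target: f"domain {target} blocked",
}


def copilot_handle(intent: str, target: str) -> str:
    """An answer-only copilot would stop at describing the step; an
    action-taking copilot resolves the intent and executes it."""
    action = ACTIONS.get(intent)
    if action is None:
        return f"No automated action for '{intent}'; returning guidance only."
    return action(target)


print(copilot_handle("isolate_host", "10.0.0.12"))  # host 10.0.0.12 isolated
print(copilot_handle("rotate_keys", "prod-db"))     # falls back to guidance
```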
On Precision AI, how might the embedding of this technology in your products prove useful to customers?
It helps that we’ve long had this view of leveraging AI in our products. It started over 10 years ago, when we began using AI for malware analysis. It helps that we had already started to embed this in Cortex, with how we both prevent and detect attacks, as well as in Prisma Cloud, with how we analyze data. So we had laid the groundwork for embedding AI across our three platforms. And so we were able to leverage that foundation when generative AI came out, to then start to augment what we have been doing in enhancing those capabilities.
In terms of real-time detection, how important is that to the industry?
My belief is, we are seeing an increase in the number of novel attacks on a daily basis. If you go back far enough, a new attack would come out, and then you would see basically the same attack used for days, weeks, even months before the attacker would have to change their code and carry out a new attack. And that change means the traditional approaches of signature-based or database-based detection and prevention are outdated. We see this in the data. The 2.3 million new, unique attacks detected on a daily basis — that’s 2.3 million “patient zeroes” that would happen if we weren’t able to be inline, preventing the first attack that we saw. So that’s the power of Precision AI applied to inline traffic.
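A small, hypothetical Python sketch of the “patient zero” point: a signature lookup can only block attacks already catalogued, while an inline model can score traffic it has never seen before. The scoring function below is a trivial stand-in, not a real detector.

```python
# Contrast: database lookup vs. inline model scoring of first-seen traffic.
# All identifiers here are illustrative placeholders.
KNOWN_SIGNATURES = {"sig-0001", "sig-0002"}


def signature_verdict(fingerprint: str) -> bool:
    # A lookup can only block attacks already in the database.
    return fingerprint in KNOWN_SIGNATURES


def inline_model_verdict(features: list[float], threshold: float = 0.8) -> bool:
    # Placeholder score; a real system would run a trained model inline.
    score = sum(features) / len(features)
    return score >= threshold


# A brand-new attack: no signature yet, but anomalous features inline.
print(signature_verdict("sig-9999"))            # False -> a "patient zero"
print(inline_model_verdict([0.9, 0.85, 0.95]))  # True  -> blocked on first sight
```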
There’s a second piece to it — attackers have started to get smarter about how to hide their attacks if you’re not actually scanning the real user traffic, as opposed to simulated traffic. Probably the best example of this is in URL filtering. URL filtering traditionally was more productivity-focused. It then started to turn into a security-focused technology — can you analyze websites to see if they have malicious content, or web-based phishing attacks? So URL filtering started to become more of a security function, not just for productivity. Well, then what you started to see was, attackers would create websites that would respond with good content to crawlers, but would respond with attack traffic to real users. The traditional approach to URL filtering is to crawl websites and analyze the response. Because the attackers had figured this out, the crawlers would see good traffic come back and mis-categorize the website as “good” rather than malicious. Or attackers would put phishing behind CAPTCHAs. So if you crawl it, you’d see a CAPTCHA. But if you see real user traffic, you’re going to see the actual phishing content come back. And so, the second part of being inline is not only preventing attacks for the first time — but actually seeing the real user traffic in order to have more accurate detection capabilities.
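The cloaking behavior Klarich describes can be illustrated with a short, assumption-laden Python sketch: if the content a crawler receives diverges sharply from what a real user receives, that divergence itself is a signal. The helper names and the digest-comparison heuristic are illustrative, not an actual URL-filtering implementation.

```python
# Toy heuristic for crawler cloaking: compare what a crawler fetched
# against what inline inspection saw delivered to a real user.
import hashlib


def content_digest(body: str) -> str:
    """Hash the response body so two fetches can be compared cheaply."""
    return hashlib.sha256(body.encode()).hexdigest()


def looks_cloaked(crawler_body: str, real_user_body: str) -> bool:
    """Flag a site whose crawler-visible content differs from the content
    actually delivered to real users, a common evasion pattern."""
    return content_digest(crawler_body) != content_digest(real_user_body)


# The crawler saw a harmless page (or just a CAPTCHA); the user got phishing.
print(looks_cloaked("<html>benign brochure</html>",
                    "<html>enter your bank password</html>"))  # True
```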
Being real time — that’s really what it’s all going to be about, just because of how fast-paced the attacks are getting?
Yes. Our first network security services were updated on a daily basis, every 24 hours — which at the time was super fast. And then we saw attackers getting faster and faster. So we went to one hour. Then we went to 30 minutes, five minutes. Then we even got to the point where it’s a few seconds. And even at a few seconds, we said, “We need to be faster.” That’s driven both by how quickly attacks change, and by the number of new and novel attacks where we want to eliminate patient zero.
If everyone was using this real-time approach that you’re describing here — how big of a difference would that make, in terms of reducing the amount of attacks that are successful?
It’s a very interesting question. It’s a difficult “what if” analysis to do. We have some approximate data on how long it typically takes from when we first detect an attack, using our inline Precision AI capabilities, to when we see general updates of threat databases or threat intelligence. And based on our rough analysis, it’s about a 60X improvement in speed of knowing about new attacks.
That would translate to a lot of the attacks not being successful?
It translates to [a situation of] being able to prevent it even the first time we see it. [But it would also prevent] the second and third attacks from getting through, before databases are updated, signatures are updated, and then protections are put in place. It’s our belief that that delta is already very important — and in the future it’s going to be even more important, where AI will allow attackers to modify their attacks on more of a continuous basis.
And so if you don’t have something like this, then it’s going to be even more of a problem than it is now?
That’s our perspective.
How big of a problem is that, would you say?
There are two data points. There’s the one that we shared [at the launch event], which is the 2.3 million new attacks on a daily basis. That gives a quantification of the number of new things that we’re seeing, and the potential risks that opens up if, as an industry, we’re not able to detect and prevent the first attack that we see. The second then requires a forward-looking view — which says, what do we believe is going to be true in 12 months, 18 months or 24 months, in terms of how quickly attackers will leverage AI to create these more continuously modified attacks? I think it’s not a big stretch to assume that this is on more of an exponential curve, in terms of the scale that we’re going to be dealing with.
I imagine your perspective would be that everyone’s going to need to adopt something like this in the future, whether it’s Precision AI or otherwise?
Absolutely. It’s very much like [how] the network security industry over time had to adopt technology similar to what we did with App-ID — in terms of how we identify and control applications — [and] User-ID, which was how we identify and apply policy by user and user role. And then third was, how the industry responded to integrating network security services into our next-gen firewall. I think this is of the same scale — in terms of, any network security product over time is going to need to have an ability to apply AI-based detection and prevention inline.
What is an industry-wide issue that you’ve been talking to people about a lot this week, maybe not specific to Palo Alto Networks and Precision AI?
Real-time [security]. If I were to look across cybersecurity, the biggest topic from my perspective is the need for cybersecurity to be real time. There are too many places where technology has not kept up with the pace of attackers. And we’ve shared data on this: the time from initial attack to breach was, even just a few years ago, 40-plus days. Last year, based on our data, it averaged about five days. But we’ve seen attacks that completed in hours. And so that time window is the amount of time that companies have to detect and remediate in order to disrupt the attack before it completes. We believe that window will continue to narrow, requiring security to be as close to real time as possible.
How optimistic are you that the market and industry understand that?
I’m optimistic, first and foremost, that companies and organizations are understanding this need. That’s a very important starting point. I’m also optimistic that we, as Palo Alto Networks, are in a position to be able to provide our customers with real-time or near real-time security outcomes. So if you take those two combined, I think that’s a very good foundation for, “This is possible.” And honestly, I hope that that permeates into the industry. Because I think cybersecurity is just too important not to be able to have this level of security for organizations worldwide.
Could you say a bit on how “platformization” is influencing your product strategy?
In the technology industry, cybersecurity is the most fragmented space. And that fragmentation then leads to customers typically having 50 or 100 — 100-plus is not unusual — different cybersecurity products. I think it’s under-appreciated how much complexity that puts on the end customer to fully deploy, integrate and operationalize all of these capabilities, without having gaps — either gaps in where cybersecurity is applied or gaps in how different products are integrated together. So from my perspective, platformization starts as a technical need for more natively integrated capabilities [where] the customer doesn’t have to figure out the integration and the utilization. That to me is the fundamental aspect of platformization. All the other things that we do — around how we approach customers with helping them from a deployment [and] commercial perspective — that’s simply to help them get to a platform-based approach more quickly. But the reason we do all of this is this underlying technical need, to simplify cybersecurity through more natively integrated capabilities.
Then do you feel like it’s not going to be adequate security for a customer, even if the products are well-integrated with third-party tools?
This does come up in customer conversations I have. This actually drives an increased need for us to enable integrations with third parties — which might be counterintuitive. But the reason for that is, if we’re going to be a platform for our customer, it becomes even more important that we are able to integrate with other third-party technology that they’ll need. And that’s been misunderstood as sort of the opposite. So across our network security platform, our cloud security platform and our Cortex platform, we actually have done more to open up the platforms — to make sure that where we do need to integrate with third-party technology, we can. [We’re doing that] to make it easy for customers: if they are going to standardize on one or more of our platforms and need other technology, we’re going to enable that to happen. Because if we start to create friction for other third-party technology, then [customers] are sort of back to where they started, which is, “Hey, it’s hard to deploy and tie these different capabilities together.”
Do companies that are more established have more of an advantage in GenAI than during previous technology transformations — where startup disruptors were more advantaged?
I don’t think it’s just GenAI. I think it’s AI in general. AI sort of feeds on data. And so, if you’re in a position to have a greater amount of data, that’s [an advantage]. There was an interesting article I saw recently that talked about this — with a startup — that had challenges because they were trying to take an AI-based approach, but they didn’t have enough data to train the AI. So the AI kept going haywire. And so yes, in GenAI with the copilots, it helps to have a lot of data that can feed them because that allows us to do a better job of fine-tuning and testing and other things like that. The other thing I showed, on Precision AI, was 4.6 billion new events analyzed every day, 2.3 million new and unique attacks detected every day. Then that feeds back into 11-plus billion attacks blocked. And all of that becomes data to then retrain the models on how attacks are evolving. So the idea is — and we’ve seen this play out — the more data you have, the better you can train the models. The better you can train the models, the better they perform. The better they perform, the more widely they’re deployed. The more widely they are deployed, the more data you get. And it creates this virtuous cycle.
So for instance, OpenAI has trained its models on lots of data, but it’s not the kind of data that’s going to make a difference in stopping attacks?
Combing the internet for information is not the same thing as actually being able to see a nation-state threat actor try to carry out a DNS tunneling attack in this one place in the world. [Having specific] data is super important. That data then creates the next set of training data for the models to then detect the next variants of attacks that we see.
Going back to the AI launch, how significant of a milestone is this in the history of the company?
I believe it’s a big milestone. I also believe that milestones like this then need to be followed with a set of phase two, phase three, phase four execution. If you go back to the very first time we introduced machine learning into one of our products, it was a big moment — but then it was the next release, the next release, the next release, where we really honed and perfected that. I see the same thing happening here. This is a very big release across how we’re embedding Precision AI in our platforms, how we’re launching security services that help our customers adopt AI safely, the launch of the copilots — these are big releases. And what you’ll see from us in the coming months, and even coming years, is how we then continue to advance those and innovate on those as new areas of focus for the company.