Microsoft GM Sees ‘Huge’ Partner Opportunity In AI Data Security, Risk Management
- by nlqip
‘There’s only so much you can learn by watching others,’ Microsoft GM Herain Oberoi said.
Herain Oberoi, Microsoft general manager for data security, privacy and compliance, called work around artificial intelligence and security “a huge opportunity for partners” — especially in areas such as helping customers understand AI risks, building applications that address those risks and building dashboards and reports around new attack surfaces.
Oberoi highlighted these approaches in an interview with CRN as ways to help chief information security officers (CISOs) and similar leadership roles in their communications with company boards and CEOs pushing for AI innovation quickly.
And for solution providers with customers more hesitant to adopt AI products, Oberoi’s advice was to focus on the risk of getting to the AI experimentation phase after customers’ competition and falling further behind.
“We acknowledge this (AI) has the potential to be extremely disruptive,” Oberoi said. “And one simple way to mitigate risk is, OK, you don’t need to roll things out in production. But if you’re not experimenting, you’re not learning. And if you’re sitting on the sidelines watching others experiment, your learning is only going to be diminished. There’s only so much you can learn by watching others. The real learning comes from doing.”
[RELATED: Microsoft’s Joy Chik On ‘Acceleration’ Of Internal Security Across Identity, Network, Supply Chain]
Wayne Roye, CEO of New York-based solution provider Troinet, told CRN in a recent interview that his recent strategy around talking to customers about security and AI is to focus on data governance.
For Roye, data governance is more proactive, addressing “what piece of information can AI possibly touch and see that I (the customer) may not be aware of.”
Oberoi spoke on areas of Microsoft’s trustworthy AI initiative that are especially relevant to solution providers within the Redmond, Wash.-based tech giant’s 400,000-member ecosystem.
Microsoft has been announcing a variety of security-focused updates to its AI portfolio in recent days, including a new security architecture for Copilot+ PCs’ Recall feature, a correction capability in Azure AI Content Safety for fixing hallucination issues in real time
Notably, the GM returned to Microsoft last year to take up his current role. He previously spent about a year with identity rival Okta as senior vice president of marketing and strategy and about four years with cloud rival Amazon Web Services as GM of databases, analytics and blockchain marketing.
His previous Microsoft tenure lasted about 12 years, leaving the tech giant in 2017 with the title of senior director of product marketing for Microsoft’s big data and machine learning portfolio, including Azure Machine Learning, according to Oberoi’s account on Microsoft-owned social media network LinkedIn.
Here’s more of what Oberoi had to say on the partner opportunity in AI data security.
What’s your message to Microsoft security solution providers on AI security?
There is a huge opportunity for us and our partners collectively to just help customers around this area. It’s relatively new. People are getting their heads wrapped around it. It’s changing every week. … We’d feel, I’d say, in some ways a responsibility to help drive as much clarity as we can and provide frameworks and best practices.
And, in some cases, tools and capabilities to help both partners and customers. … Everyone’s deploying or at least evaluating AI at this point. And so the way it’s playing out in organizations is you’ve got the board asking the CEO, ‘What’s your AI strategy? How are you going to either disrupt or reinvent?’
And then the CEO is looking at the C-suite. And within the C-suite, in my world, specifically, we talk to the chief security officers.
And in some cases, they’re also responsible for risk more broadly. And so they’re trying to get their arms around … How do I frame the risk? They don’t want to be viewed as the people that say, ‘No.’ They want to be the enablers.
And so part of what we’ve been trying to do with this ‘trustworthy AI’ initiative is to start to help CISOs (chief information security officers) and also just risk leaders put their arms around, How do we think about the risk?
What should partners know about the trustworthy AI initiative?
It’s really got two pieces to it. It’s got the commitments that we make as a company when we build our AI systems, to say this is why our AI systems are secure, private and safe.
And those commitments include things like what we’ve been talking about with our Secure Future Initiative. Things like our privacy principles, where we say, your data is your data, and we’ll never train our foundation models with your data–that sort of thing.
And then also responsible AI principles, which include things like, safety, protection against harmful content, bias–things like that.
And so the frame of thinking about things in the context of both commitments we make and then capabilities we give customers so that then they can do the same thing for the apps they build– that’s been helpful.
Sounds like this initiative is an opportunity for partners?
In all areas (of the initiative), there is opportunity for our partners because there’s opportunity in the context of helping customers understand the risk, and then helping customers build either their apps that deal with those risks in a very, very specific way.
Just to touch on security … when we think about what are some of the security risks that organizations have to think about–obviously, data security, governance, compliance that comes up a lot–AI applications are inherently very data centric.
And one of the things I like to help customers with the framing is, if you think about the world of security, for the longest time, you’ve got existing, what we’ll call, attack surfaces … (which) include things like your application, your data, your endpoints, your network, your cloud.
And so you have products and capabilities that provide, security against these different things. With AI systems coming in, you now have these new attack surfaces that you have to think about.
Things like the AI model itself, the actual prompts and the responses. Is there sensitive information coming out of my response that I have to think about … orchestration frameworks like LangChain.
And so if you think about the world of security posture management hey, did my version of LangChain include any vulnerabilities? How am I going to know that?
Then there is the data itself. The data you use to train the model, the data you use to fine-tune the model, the data you use to ground the model to protect against hallucinations. All that data is data that you have to protect.
And so the new types of attacks that you see are things like prompt-injection attacks in the prompts and response, or jailbreak attempts or data poisoning … those examples are the (types of) risks that we are thinking about.
How should partners think about the AI regulatory environment?
You’ve got all these new regulations coming up. You’ve got the EU (European Union) AI Act in EMEA (Europe, the Middle East and Africa).
In the U.S., we’ve got more of a NIST (National Institute of Standards and Technology, part of the U.S. Department of Commerce) AI risk management framework. … And then the regulations are changing, to my earlier point.
And so customers are saying, Help me keep up with the regulations. And how do I know that the systems I build today are going to be compliant with the regulations in the future? … We (Microsoft) are acknowledging this is going to be how to think about the risk.
This is how to bucket them so you can get your arms around it. And then saying, the approach we want to take to tackle this… we’re trying to be prescriptive here.
You’ve got to think about–there’s a preparation phase, then there’s a discovery phase around the risk, then there is a protection and governance (phase). And I would say preparation and discovery are sequential. But protection and governance go hand in hand.
Preparation means, have you done the work to have comprehensive identity and identity governance aspects and control. Products like Entra (formerly known as Microsoft Azure Active Directory or Azure AD) would be an example of capabilities we give to help with that.
Or the one that we’re seeing a lot–particularly in my world–is have you done the work to think about your data classification and labeling? Your metadata management? Your lineage, all of that. … That’s what (Microsoft data loss prevention tool) Purview enables.
How about the role of partners in the discovery, protection and governance phases?
Discovery means, do I have visibility into the AI apps being used in my organization? And do I have visibility into sensitive data that’s being shared through the prompts and responses of those AI apps in my organization? And then who’s got access to what?
So even just understanding how much is being shared and having that visibility. … This is a huge opportunity for partners because what partners came to us and told us was, We’ve been building dashboards for our customers so they can discover and understand what sensitive data is at risk, or which applications are being used.
And once we expose them to that dashboard and they see the risk, they can’t unsee it, they have to act on it. And so once you have to act on it, you can then start to break down the problem and actually start to make progress.
Partners play a key role here because you’ve got a lot of different ways in which you can build custom reports or custom dashboards that are domain specific or industry specific just to help leaders within organizations understand, What is the surface area of the risk I’m looking at?
And then we get into the protection and governance phase. Protection includes things like, now you’re building policies like data loss prevention, policies and controls. You’re putting in controls for data security posture management to understand, Do I have vulnerabilities in my configurations?
You’re doing things like threat protection, but extending threat protection to AI workloads against things like prompt injection, jailbreak and all of that.
And then finally, on the governance side, we’ve had capabilities in Purview, for instance, that we called Compliance Manager where you can basically do a self-assessment against a particular regulation and say how compliant am I with this particular regulation?
And then it gives you some recommendations on, Here are the next steps you need to take to improve your compliance score. And so pulling in regulations like the EU AI Act, the NIST AI risk management framework, those are things we’ve been doing to make sure that the regulatory support in the products include all the latest and greatest changes that are happening in the AI world.
Can you tell me more about the dashboards?
The dashboards are really an extension of … the way we think about both security posture management as well as data security posture management.
And so data security posture management is not only specific to AI. … But you can hone in on, OK, tell me about specifically my prompts and responses from my GenAI (generative AI) apps.
And tell me, is there harmful content? Or is there sensitive data in that? Something like Defender for Cloud (Microsoft’s cloud security posture management (CSPM), cloud workload protection (CWP) and development operations (DevOps) security tool) gives customers today the ability to see what my application risk is for any application.
It doesn’t have to be an AI application. But now you can specifically drill into and say for GenAI applications, I might have additional risk factors that I consider that might change the risk score of those applications. … Where we can extend our existing products to AI workloads, we’re going to do that, because it’s easy.
But then, to my earlier point, in some cases, you’ve got these new attack surfaces, these new threat vectors where (it’s) net-new capability we’re building. … We’ve been pushing out a ton of capabilities over the last year. The first place we did it was with regard to Copilot for Microsoft 365.
That one, from a partner perspective … a lot of Purview-attached M365 Copilot and then now extending that to custom AI apps that customers will build on, let’s say, Azure, Azure AI, Azure OpenAI.
Both those areas are (where) we’ve been extending existing controls, building more controls. And then the visibility part is really about giving them a place to start.
Any advice for solution providers who have a customer saying it is still too early for them to adopt AI, they want to see the rate of change settle some first?
Here at Microsoft, we have a strong culture of growth mindset, which means part of it is, if you’re not experimenting and failing, you’re not learning. … We acknowledge this (AI) has the potential to be extremely disruptive.
And one simple way to mitigate risk is, OK, you don’t need to roll things out in production. But if you’re not experimenting, you’re not learning. And if you’re sitting on the sidelines watching others experiment, your learning is only going to be diminished. There’s only so much you can learn by watching others. The real learning comes from doing.
Frankly, I don’t know if I’ve met any customer that’s not doing some kind of experimentation. And part of it is also, as you do the experimentation, you learn about it, you get a better handle on the risks as well.
It’s more just a function of–how quickly do you want to move? And within the industry that you have, even if it’s an industry that may be historically maybe not one that’s technology forward.
I was speaking to a CISO from a mining company … one of his challenges was, the technologies we build are supposed to sit in those heavy machinery for 50 years. They are supposed to work for 50 years. So they just don’t move as fast.
But even in that situation, they’re looking at AI and saying how do I leapfrog what I’m doing today? I would definitely encourage experimentation mostly from a standpoint of, If you’re not doing it, you’re not going to be learning. And failure is part of the process.
Would you hesitate to use the term ‘future proof’ with AI given how fast the technology and regulatory environment are changing?
I don’t actually use the term future proof partly because of the point of we don’t know what the future is going to look like. … You can be optimistic about the whole thing or pessimistic about it.
And obviously, here at Microsoft, we are very optimistic. And … we think this can truly enable change in places that didn’t have it before. Down to increasing the GDP of countries.
But in order to have that, the mindset is more one of learning and experimentation and creativity rather than fear and risk. The risk mitigation part is extremely important, and that’s our job on the silicon and the security side. But really … how do we become enablers for this potentially game changing capability that we’ve got? And one of the ways to do it is to start working with it and learn from that.
Source link
lol
‘There’s only so much you can learn by watching others,’ Microsoft GM Herain Oberoi said. Herain Oberoi, Microsoft general manager for data security, privacy and compliance, called work around artificial intelligence and security “a huge opportunity for partners” — especially in areas such as helping customers understand AI risks, building applications that address those risks…