CTO Fidelma Russo On Customer Choice, Trust And Why HPE Now Has Its Own Virtualization Capability
Hewlett Packard Enterprise Executive Vice President and CTO Fidelma Russo says HPE’s new KVM-based hypervisor is a response to partner and customer demand for a virtualization partner that is “consistent,” “trustworthy” and one they can have a “relationship” with going forward.
Russo told CRN in an exclusive interview at HPE Discover that HPE decided to enter the hypervisor market with its own open-source KVM hypervisor in response to customer and partner reaction to the licensing, pricing and product changes that followed Broadcom’s acquisition of VMware.
“We would never have done this if we were not being pushed by customers,” said Russo, who is also general manager of HPE’s Hybrid Cloud business. “We have a lot of choices in our portfolio for customers. What customers said to us was, ‘We hear you that you have a lot of choices but what we want is a trusted company that has been around for a while, that has been our trusted vendor and we want you to give us an option.’”
The new HPE KVM hypervisor virtualization capability provides customers and partners with a “choice” in the virtualization market, said Russo.
“We have a new OEM agreement for those customers that want VMware from us,” she said. “It is all about choice. But we do believe that with our virtual machine technology it will be a good play for the channel and a good play for our customers.”
The new KVM hypervisor virtualization capability is being previewed at HPE Discover this week and will be available as a beta program this summer, with the first release scheduled as part of HPE Private Cloud Business Edition in the second half of 2024. “We have gone back to a number of these customers and they are signing on,” said Russo.
HPE has not finalized the pricing on the virtualization capability but has decided not to use the Broadcom-VMware per-core pricing model that has resulted in price increases for many customers.
“As people move to per-core pricing, the price is just going through the roof,” said Russo. “And we don’t believe that is a reasonable pricing methodology. I don’t have the pricing done yet, but it is not going to be per core.”
Russo said there is a big opportunity for partners to help customers rearchitect their virtualization path with the advent of GenAI and to help customers refresh their IT estate to make their VMs more efficient.
“That is a huge refresh opportunity for everybody,” she said. “You know, when the price goes up for everything, people look at it and they go, how do I refresh? How do I consolidate my estate? How am I going to refresh? That’s a huge opportunity for partners if we work together.”
Why did HPE decide to enter the virtualization market with its own hypervisor virtualization capability?
Since the [VMware] deal closed with Broadcom we’ve been hearing from customers because it’s really customers first and then it’s the channel. Because at the end of the day, the people who we all serve are the customers, whether it’s going through the channel or it’s going direct.
Our partners talk to the customers. We talk to the customers, and all we’ve heard is that they are all looking at what they need to do now.
It’s clear there are parts of their [IT] estate that are always going to stay on VMware depending on the application that they have. And there are parts of their estate they are looking at [with regard to] should they move? And if they move, what do they move to and how do they move and where do they move to? And do they move with a partner and how does the partner help them? And can the partner even help them anymore?
So when we looked at all of that and what we needed to do, we have always been about choice. One of the things we have is a tool called Cloud Physics [a SaaS-based cloud assessment tool that analyzes on-premises IT environments and makes recommendations on application modernization and cloud migrations].
That is an underutilized tool that came from a small acquisition that we did years ago [in 2021] and our partners use it as well. You can go in and look at your [IT] estate.
The first thing you have to do is [look at] how many of your applications do you really need and do you shut some down? That is No. 1. No. 2 is how many of them do I move to a more modern architecture? And No. 3 is then can I put some of them on a different stack?
Once you have kind of done that, we looked at it and it is still all about choice. We have talked to a lot of customers and a lot of partners and they have kind of said to us, ‘Hey HPE, you have been all about choice. But what about you? What can you offer us?’
We have had Ezmeral containers for quite a while. And we looked at it and we said we have a history and quite a lot of software, and we are going to invest in taking an open-source KVM [as a hypervisor]. We have cluster orchestration technology, and we are going to complement our Ezmeral containers with Ezmeral virtual machines and offer that on our private cloud as well.
In addition to that, you may have seen the blog that came out from VMware. So we do have a new agreement with VMware. You will see on the [Discover show] floor a VCF [VMware Cloud Foundation] co-engineered system.
We have signed up with VMware for a new agreement. So we have a co-engineered system with VCF. It will be a stand-alone system for those customers that want a VCF co-engineered system. And we have a new OEM agreement for those customers that want VMware from us. It is all about choice. But we do believe that with our virtual machine technology it will be a good play for the channel and a good play for our customers.
What are the economics of the HPE virtualization capability versus Broadcom-VMware?
We don’t have the pricing done yet, but we are not going to do per-core pricing.
Why have you decided not to do per-core pricing?
If you look at processor technology today, basically with CPUs they are just putting more and more cores in. The price curve is such that with the same amount of use you are getting more and more cores. As people move to per-core pricing, the price is just going through the roof. And we don’t believe that is a reasonable pricing methodology. I don’t have the pricing done yet but it is not going to be per core.
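As a rough back-of-the-envelope illustration of the dynamic Russo describes, the sketch below compares per-core licensing against flat per-socket licensing as core counts per socket grow across refresh cycles. All prices and core counts are hypothetical example figures, not HPE, Broadcom or VMware pricing.

```python
# Hypothetical illustration of why per-core licensing costs climb with modern CPUs.
# All prices and core counts are made-up example figures, not vendor pricing.

PER_CORE_PRICE = 100      # hypothetical $ per core per year
PER_SOCKET_PRICE = 2_000  # hypothetical $ per socket per year
SOCKETS = 2               # a typical dual-socket server

# Core counts per socket across successive hardware refresh cycles.
for cores_per_socket in (16, 32, 64, 96):
    per_core_total = PER_CORE_PRICE * cores_per_socket * SOCKETS
    per_socket_total = PER_SOCKET_PRICE * SOCKETS
    print(f"{cores_per_socket:>3} cores/socket: "
          f"per-core licence = ${per_core_total:,}, "
          f"per-socket licence = ${per_socket_total:,}")

# The per-core bill grows 6x across these refreshes (from $3,200 to $19,200)
# while the per-socket bill stays at $4,000 -- the same workload, a much
# larger licensing spend, which is the trend the per-core model amplifies.
```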
How would you characterize the HPE virtualization economics versus the Broadcom VMware per-core pricing?
It will be more economical.
VMware is a very good partner of ours.
We are not about bundling all of the software together. You may want to have our storage separate from our virtual machines, separate from our manageability with OpsRamp, separate from our disaster recovery with Zerto. So you can pick and choose the different pieces of software from us depending on what you want out of the stack. It is just a different view of the world.
How important is that choice of being able to pick and choose versus the Broadcom-VMware model [reducing the number of SKUs available to customers and partners]?
We are probably going after just a different part of the enterprise. We have always been about choice and flexibility.
HPE has said enterprises can achieve up to five times savings when they adopt HPE private cloud to modernize their virtualized IT estate. What does that consist of?
We believe that as you upgrade to newer hardware, taking into account savings with the virtualization software and consolidating onto a platform where, in fact, you are not also paying for an integrated stack, you can get up to five times savings.
How important is the open-source KVM to those economics, and what is the difference between open-source KVM and vSphere?
vSphere 20 years ago was based on open source.
In the end our overhead is not based on running the platform below [in the server]. We have the cloud control plane so a lot of the management is up in the cloud control plane and not down in the server below. We take less processing power in the server below. That is where some of the overhead is saved.
What is going to be the difference in cloud virtualization for the future that HPE is bringing to market versus Broadcom-VMware?
What you are trying to do will determine the right answer for you. As you are looking at your application landscape these days and really looking at how your workloads are going to shift over time to cloud-native and especially AI workloads, it is getting to be very important to think about how you pick the runtime that is suited for your workload going forward. That is why choice is important. That has got to do with containers, bare metal and VMs.
Not all workloads are going to run on VMs going forward. You are going to start to see this especially with the emergence of AI.
As a CIO looks forward into what they are going to do with their landscape, they really have to start to think about are they making a decision that is going to allow them the flexibility to have multiple stacks or multiple runtimes or are they making a decision about whether they’ll be locked into something? That is a critical decision as you are kind of looking at refreshes.
Are we at a turning point where the old virtual machine model has run its course?
No, because this industry doesn’t change very quickly. It is very expensive to rewrite applications. So if you look at rewriting applications it is an extraordinarily expensive what I would call ‘hobby.’ It doesn’t happen.
So virtual machines are critically, critically important, and many people take containers and they put them in virtual machines to get the cluster management out of virtual machines, to get the availability out of virtual machines.
But as you look toward AI going forward, you’ll see more AI applications sitting on containers and bare metal. So you start to see things kind of co-habitating next to each other. So you need to start to look at how do you have these stacks that can live together and how do you choose private clouds or public clouds where you can get choice so you can have flexibility to be able to have a modern infrastructure and then be able to observe it and manage it in a way that is economically viable because people have limited budgets.
How will this stack up versus other offerings in terms of cost, price/performance and efficiency within a container-based, AI-heavy environment?
HPE Private Cloud AI is another entry in our private cloud portfolio. And you’ll see over time Private Cloud AI will have Ezmeral containers in it. It will have Ezmeral virtual machines. And you can imagine a world where you are running Private Cloud AI with some processing power left over and you might want to run some virtual machines.
Can you characterize how you feel about this new virtualization choice for partners and customers?
I think for HPE it is a moment where the technologies we have had [are coming to the fore]. Sometimes we have been ahead of the curve in technologies and we haven’t really realized the power of them. With Ezmeral I think we didn’t realize the power of what we had. We were a little bit too early. And now the emergence of GenAI, and actually some of the changes Broadcom [has made], have basically made containers and the orchestration software we have more relevant.
It is kind of like our time is now. From a partner perspective, we have always been a partner-friendly organization, but we now have a lot of value to add to our hardware.
So instead of thinking of us as a software organization, I think of it as how do we bring value-add to our hardware, to make our hardware way more valuable to our partners, so the partner community has a much stronger play for our customers.
You’ve got the infrastructure. We don’t have to go all the way up to the application. We go up the stack from the infrastructure buyer to make ourselves way more relevant, and make that IT user way more relevant now as they are talking about what are they going to do about their virtual machine problem. What are they going to do about being relevant to the GenAI conversation? You can’t really have the line of business all running and figuring out what they are going to do with GenAI because you have to have guardrails. You have to think about ethics and standing up applications for people.
I am really excited. I think it is a huge opportunity for partners.
How big is the virtualization market opportunity for HPE and partners?
There are two pieces to this. There is the how big is the virtual machine market or the hardware part of the virtual machine market, and then there is how big is the enterprise part of the GenAI market.
In terms of the enterprise part of the GenAI market, I think it’s billions [of dollars]. I don’t think we know. I think it’s one of these markets that if you put a number on it, I think we’ll be proven wrong.
I mean just look at how fast Nvidia is growing and they’re really growing in selling to the people who are building [LLM] models and driving the kind of training piece of [AI], but the enterprise part of the market is just beginning to start.
Then behind all of that is how do I refactor my enterprise and what do I do with making sure that I refresh my [IT] estate, to consolidate, to make my estate more efficient running VMs. That is a huge refresh opportunity for everybody.
You know, when the price goes up for everything, people look at it and they go, how do I refresh? How do I consolidate my estate? How am I going to refresh? That’s a huge opportunity for partners if we work together.
Are customers asking for this choice?
I have been in this industry for a long time. I have never been in as many conversations in the last few months where I have walked into conversations on virtualization and been asked, ‘What is your opinion? What should we do? How are we going to change this? What are the choices?’
Can you expound on that? What are you hearing from customers?
We would never have done this if we were not being pushed by customers.
We have a lot of choices in our portfolio for customers. What customers said to us was, ‘We hear you that you have a lot of choices but what we want is a trusted company that has been around for a while, that has been our trusted vendor and we want you to give us an option.’ So [HPE President and CEO] Antonio [Neri] and I were like, ‘OK.’
We said we have these other partners and they said, ‘No, we want you to do something.’ And we were like, ‘OK, we will come back [to you].’
We have gone back to a number of these customers and they are signing on … not for every application but for different use cases: test and dev, certain enterprise use cases. Some of it we had to do ourselves for some of our internal software that ran on virtual machines in our internal environments.
When did the groundswell hit in terms of the response from customers for an HPE hypervisor?
In January-February. We had been doing work for internal use cases.
This is absolutely and completely in response to customers, big, big customers.
How important is HPE’s role as a trusted partner for customers?
In terms of trust as well, I think their view is that they want a partner who is going to be there for the long haul and that they can see is consistent, trustworthy, has scale and is one that they have a relationship with. And that is what they are looking for.
What are the benefits for partners who have for years done their VMware licensing through HPE?
It has been very complicated for partners. It’s been very complicated for everybody in terms of this.
The whole partner landscape changed within a month [after Broadcom acquired VMware].
So we had numerous conversations with partners on what is the answer? How do we do things? So our answer is in response to that.
Now we are keeping it narrowed to the private cloud lane because we feel that is manageable and we want to make sure we have a good experience and that we can deliver an experience we can stand behind.
What is the message in terms of the experience HPE will provide in terms of price?
We are not ready with this until the middle of the summer. We weren’t going to come out publicly with this until the summer but there was so much demand [that we decided to announce it at HPE Discover].
What is the call to action for partners that want to take advantage of this?
The call to action is we will have a beta program later in the summer and we will be signing up partners for a limited beta program.
The announcement for this is actually at Partner Summit. It isn’t at Discover. That tells you how important this is for partners.
How does this integrate with Nvidia and HPE Private Cloud AI? And what are the implications for the future path to Private Cloud AI for partners and customers?
The reason actually we can do these things so quickly is the [HPE] private cloud control plane. With the private cloud control plane in PCBE [Private Cloud Business Edition] we have this pluggable architecture, and in this pluggable architecture we were able to plug this new virtual machine technology into PCBE. It has this hypervisor, and it is controlled from up in the GreenLake cloud platform: it has a piece up in the GreenLake cloud platform and a piece down in PCBE.
We did the same thing with the container technology that runs [HPE] Private Cloud AI. It is basically the same private cloud control plane when you buy into PCBE and PCAI [HPE Private Cloud AI]. So this [virtualization capability] is two clicks. PCAI is three clicks.
PCAI just has the higher-level software on it, the models, the Nvidia software stack, and the data lakehouse, which is basically Ezmeral data fabric which allows you to bring in all of your external storage so you can feed your models.
So this is a new hypervisor in the market from HPE?
So we’re taking Ubuntu open source and we actually have a number of patents pending on a cluster orchestration layer that has high availability and allows you to run multiple KVMs. So you can have high availability within the control plane. So if you are disconnected from the cloud you can continue to run. Also, you don’t have to use any of your resources down in your servers to manage those KVMs. You do it all up in the cloud. That is how we get our five times TCO.
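For readers unfamiliar with the open-source building blocks Russo mentions, the sketch below shows one generic way of defining and starting a KVM guest on an Ubuntu host through the standard libvirt Python bindings. It is only a minimal illustration of off-the-shelf KVM tooling; it is not HPE’s cluster orchestration layer, control plane or API, and the VM name and disk path are hypothetical.

```python
# Minimal, generic sketch: define and start a KVM guest via the standard
# libvirt Python bindings on an Ubuntu host (requires qemu-kvm, libvirt and
# the libvirt-python package). This is NOT HPE's orchestration layer or API;
# the domain name and disk image path below are hypothetical placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local KVM/QEMU hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the guest definition with libvirt
dom.create()                           # boot the guest
print("Running guests:", [d.name() for d in conn.listAllDomains()])
conn.close()
```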
So now customers have a choice in the hypervisor approach with the new HPE virtualization capability versus Broadcom-VMware, Nutanix, etc.?
You’ll see a picture that will show you us, Nutanix, VMware and others.
On the Nutanix stack you have got captive storage and networking, hypervisor and management software. We have got our external storage but we have also just released software-defined storage on AWS, for instance. We have our networking. And now we have our hypervisor. And we have our management observability software with OpsRamp and we have disaster recovery with Zerto.
So we have quite a few of the pieces of the stack that you can buy together or you can buy them separately.
What is the biggest takeaway in terms of how future-oriented this virtualization capability is?
So you have from HPE an end-to-end stack that is multi-cloud, multivendor and AI-ready.
With OpsRamp, you will see the natural language copilot we developed is not just on PCAI managing your [AI] models and providing observability on the models. It is also integrated with CrowdStrike, providing security on the models. It also provides observability across other vendors. We have come a long, long way.
Why should partners and customers choose the HPE hypervisor capability versus the other offerings on the market?
Why us? Because you have choice. You can either pick our stack end to end or you can pick the pieces that you need. We are not going to lock you in. So you can have hybrid cloud, multi-vendor and multi-cloud with the services around it.
What is the biggest difference between what HPE is doing with HPE Private Cloud AI versus what Cisco and Dell are doing with AI?
It is an architecture, and you know what, architecture matters. That is what it is. And it is that private cloud control plane that allows us to snap all these things in, and you can bring these things up in three clicks.
How dramatic is the difference between HPE Private Cloud AI and the public cloud with what you have called four to five times more cost-effective and up to 90 percent more productivity?
It is actually pretty simple. Here is what happens. When you start in the public cloud, you start to run and you are paying by the drip, and then you have to extrapolate out how much this is going to cost you. When you do that and you look at it over time, you begin to realize the amount of money you are going to pay is astronomical.
You will hear from our CIO actually who started in Azure and is now bringing his chat application back on-premises and there are a number of these cases that are happening.
When you train the model and then you want to run inference, and you keep running the inference and fine-tuning the model, you are continuing to have to pay on the drip.
Whereas if you have it in a CapEx or you own the infrastructure on-premises, it is a lot cheaper. It is the same as cloud for a static workload that isn’t bursting.
The other piece of AI is that because of privacy and security people have concerns around their data, so AI is actually a workload that is really hybrid. It is having people really look at on-prem again.