
Breaking Analysis: HPE wants to turn supercomputing leadership into gen AI profits

With Rob Strechay & Andy Thurai

HPE’s announcement of an AI cloud for large language models highlights a differentiated strategy that the company hopes will lead to sustained momentum in its high performance computing business. While we think HPE has some distinct advantages with respect to its supercomputing IP, the public cloud players have a substantial lead in AI, with a point of view that generative AI is fully dependent on the cloud and its massive compute capabilities. The question is: can HPE bring unique capabilities and focus to the table that will yield competitive advantage and, ultimately, profits in the space?

In this Breaking Analysis we unpack HPE’s LLM-as-a-service announcements from the company’s recent Discover conference and try to answer the question: Is HPE’s strategy a viable alternative to today’s public and private cloud gen AI deployment models, or is HPE ultimately destined to be a niche player in the market? To do so we welcome to the program CUBE analyst Rob Strechay and Constellation Research vice president and principal analyst Andy Thurai.

HPE Announces an AI Cloud

In 2014, prior to the split of HP and HPE, HP announced the Helion public cloud. Two years later it shut down the project and ceded the public cloud to AWS. At the time, HPE lacked the scale and differentiation to compete. 

The company hopes this time around will be different. Last week at its Discover event, HPE entered the AI cloud market via an expansion of its GreenLake as-a-service platform. The company is offering large language models on-demand in a multi-tenant service, powered by HPE supercomputers.

HPE is partnering with a Germany-based startup called Aleph Alpha, a company specializing in large language models with a focus on explainability. HPE believes this is critically important for its strategy of offering domain-specific AI applications. HPE’s first offering will provide access to Luminous, a pre-trained LLM from Aleph Alpha that will allow companies to train and tune custom models using their own proprietary data.

We asked Strechay and Thurai to unpack the announcement and provide their perspective. Here’s a summary of that conversation:

The core of the discussion centers on HPE’s plans to utilize Cray supercomputing infrastructure in an ‘as-a-service’ model, aiming to make high-performance computing more accessible.

The following key points are noteworthy:

  • Strechay acknowledges HPE’s innovative approach of providing supercomputing power as a service, leveraging its Cray technology, but points out the announcement precedes the actual general availability (GA) by about six months. He suggests that HPE is playing catch-up in the LLM space but acknowledges they are coming at it from a novel angle of high performance computing.
  • Thurai agrees with Strechay’s assessment but adds some optimism, suggesting that the proposed model could be compelling for large workloads. He finds the idea of users being able to hand their biggest workloads to HPE, without needing to fine-tune anything, particularly attractive for high-performance computing (HPC) tasks.
  • However, Thurai also has concerns. He calls out the absence of concrete details about key aspects such as machine learning operations (MLOps) in HPE’s announcement, and he emphasizes the need to see these before forming a solid opinion on the viability of HPE’s strategy.
  • Strechay also points out that this offering should be thought of more as PaaS than IaaS.

Bottom Line:

The analysts are cautiously optimistic about HPE’s announced strategy, noting that it could potentially revolutionize how large workloads and high-performance computing tasks are handled. However, both agree that the company needs to provide more specifics about its execution plan, particularly around MLOps, before any substantial conclusions can be drawn. Ultimately, it’s a matter of execution.

Narrow Workload Scope to Sharpen Focus 

The conversation between Strechay and Thurai further delves into the specific workloads HPE plans to address with its new LLM-as-a-service offering, including climate modeling, bio-life sciences, healthcare, and potentially financial modeling. The analysts also discuss HPE’s partnership with a lesser-known company, Aleph Alpha.

The following key points are noteworthy:

  • Strechay identifies the three major sectors HPE plans to target – climate, bio-life sciences, and healthcare. He suggests these are sectors in which the Cray supercomputing infrastructure excels, and that HPE’s approach of making this infrastructure available as-a-service can simplify the process for users.
  • Strechay highlights the platform-as-a-service (PaaS) nature of the offering, emphasizing that users can either leverage HPE’s models or import their own, like those from Anthropic. This PaaS nature means users won’t have to go through the rigors of setting up their own infrastructure.
  • Thurai discusses HPE’s partnership with Aleph Alpha, noting that the ultimate goal for HPE is to demonstrate its capability to handle the training of large language models (LLMs), which have become as demanding as traditional high-performance computing (HPC) tasks.
  • Thurai notes that he appreciates HPE’s demonstration but voices concerns about the lack of detail on how HPE plans to handle diverse AI, ML, and deep learning workloads. He also notes that while HPE has an affinity for open source, the broader ecosystem of components required for a robust AI/ML service is still unclear.

Bottom Line:

HPE’s new strategy offers promise in making supercomputing as-a-service a reality for significant sectors like climate, healthcare, and bio-life sciences. Its partnership with Aleph Alpha, though not a mainstream company, signals a move toward demonstrating prowess in handling large AI workloads. While the direction seems promising, we still have concerns about the absence of details around handling diverse AI, ML, and deep learning workloads and the overall ecosystem approach.

I think in Europe, sustainability will significantly help them. I don’t think it’s as big an advantage in North America… -Rob Strechay, CUBE Analyst

[Watch this clip of the analysts’ discussion, unpacking HPE’s AI Cloud and the prospects for success].

High Performance Computing Meets AI

HPE’s fundamental belief is that the worlds of high performance computing and AI are colliding in a way that confers competitive advantage to HPE. Indeed, HPE has a leadership position in high performance computing as shown below. 

HPE holds the #1 and #3 positions among the world’s top five supercomputers with its Frontier and LUMI systems. Both leverage HPE’s Slingshot interconnect, which the company believes is a critical differentiator.

It also believes that generative AI’s unique workload characteristics favor HPE’s supercomputing expertise. Here’s how HPE’s Chief Technology Officer for AI, Dr. Eng Lim Goh, describes the difference between traditional cloud workloads and gen AI. Let’s play the clip and come back and talk about it.

The traditional cloud service model is where you have many, many workloads running on many computer servers. But with a large language model, you have one workload running on many computer servers. And therefore, the scalability part is very different. This is where we bring in our supercomputing knowledge that we have for decades to be able to deal with this one big workload on many computer servers. -HPE’s Dr. Eng Lim Goh

[Watch and listen to Dr. Goh’s explanation of the difference between typical cloud workloads and gen AI].

Here’s a summary of the analysts’ discussion:

Strechay and Thurai dive deeper into HPE’s legacy and potential within the large language model market, while analyzing the challenges the company may face. Strechay draws on the company’s rich history in handling large applications, suggesting that this experience could give it a certain advantage. Thurai, however, seems skeptical about the company’s ability to leverage these resources and align them with the market’s needs.

The following additional points are noteworthy:

  • Strechay acknowledges HPE’s history and pedigree in managing large applications across many servers, drawing upon its involvement in the Open Grid Forum and Global Grid Forum. He sees HPE’s long-standing relationships and experiences with significant entities like NASA and the DOE as a potential competitive advantage.
  • Thurai questions the mainstream appeal of HPE’s service, suggesting that it’s more niche-oriented. He disagrees with the suggestion that HPE is the only one offering HPC services, mentioning Amazon’s HPC service as an example of a robust competitor.
  • Thurai raises concerns about the amount of data accessible to HPE relative to the public cloud, pointing out that much of the innovation and AI workloads will go to the hyperscale clouds due to data accessibility. He acknowledges HPE’s powerful supercomputers and storage capacity but questions whether the promise to handle the most extensive workloads will be sufficient to move the needle in HPE’s favor.

Bottom Line:

HPE’s vast experience and pedigree in managing extensive applications and long-standing relationships might provide them an advantage in the large language model market. However, potential challenges in data access for innovation workloads and the competitiveness of the market may pose hurdles to HPE’s success. While the company boasts a powerful supercomputer and storage capacity, its ability to turn these assets into a compelling offering that outperforms rivals remains uncertain.

[Listen to the analysts discuss HPE’s high performance computing heritage and the degree to which it confers advantage].

Follow the Money…Breaking Down HPE’s Business Segments

Above, we take a look at HPE’s lines of business and how its HPC & AI segment performs. Remember, HPE purchased Cray in 2019 – and Silicon Graphics (SGI) a few years before that – to get into the HPC space.

Looking at HPE’s most recent quarter you can see how it reports its business segments. HPC & AI is a multibillion-dollar business – and it’s growing – but it is essentially a breakeven business. So it brings bragging rights but not profits. Intelligent Edge – a.k.a. Aruba – is the shining star right now with a $5B+ run rate and 27% operating profit. Margin-wise it’s HPE’s best business, and it throws off nearly as much profit as HPE’s server business.

Here’s how HPE CEO Antonio Neri describes HPE’s advantage:

If you think about how public clouds are being architected, it’s a traditional network architecture at massive scale, with leaf and spine where generic or general purpose workloads of sorts use that architecture to run workloads and connect to the data. When you go to this [LLM] architecture, which is an AI native architecture, the network is completely different. You mentioned Slingshot…That network runs and operates totally different. Obviously, you need the network interface cards that connect with each GPU or CPU. And also, a bunch of accelerators that come with it. And there is silicon programmability with the contention software management. And that’s what Slingshot is all about, and it takes many, many years to develop. But if you look at public clouds today, generally speaking, they have not developed a network. They have been using companies like Arista, Cisco, or Juniper, and the like. We have that proprietary network. And so does Nvidia, by the way. But ours actually opens up multiple ecosystems and we can support any of them. So, it will take a lot of time and effort [for clouds to catch up]. And then, also remember, you’re now dealing with a whole different compute stack, which is direct liquid cooling, and that requires a whole different set of understanding. -Antonio Neri, HPE CEO

[Listen to this clip from Antonio Neri talking about HPE’s unique IP in this space relative to the public cloud]. 

There’s a lot to unpack in what Antonio stated, including the network, the Slingshot interconnect, the data services ecosystem and liquid cooling. We asked the question: “Is this a flip on ‘Jassy’s Law’ – i.e. there’s no compression algorithm for experience? Or does HPE have blind spots?”

The following points summarize the analysts’ take:

  • Thurai suggests that while HPE may seem to be highly involved in AI, most of its work is still in classic HPC workloads. Its focus on AI workloads, including LLMs, appears to be more of a demonstration than actual work with these types of systems. Thurai expresses skepticism about HPE’s ability to convince users to run LLM workloads on its servers due to the lack of an ecosystem and an MLOps presence.
  • Thurai further notes that training models requires partnerships – for example with a company like Hugging Face – something HPE did not put forth. Contrasting this with AWS’ approach, HPE’s strategy seems focused on rebranding classic HPC workloads as AI workloads, a move whose success reasonable observers will question.

Bottom Line:

The key issue is whether HPE’s strategy of focusing on classic HPC workloads can be profitable. While HPE’s network and interconnect give the company potential advantages, these may be short-lived, as commercial components are available off the shelf. Expertise with liquid cooling in data centers is nice, but the real question will come down to HPE’s ability to attract customer data to its platform versus those of competitors.

[Listen to the analysts riffing on the comments put forth by Antonio Neri].

How IT Decision Makers are Thinking About using Gen AI & LLMs

The next question we want to explore: Does HPE’s service have the potential to go mainstream, or is it destined for niche status?

Above we show some ETR data asking organizations how they’re pursuing generative AI and LLMs and which use cases they’re evaluating or actively deploying in production. Note that 34% of the organizations say they’re not evaluating, which is surprisingly high in our view. But for those moving forward, the use cases are what you’d expect: chatbots, code generation, writing marketing copy, summarizing text, etc.

HPE has a different point of view. They’re focusing on very specific domains where companies have their own proprietary data and want to train models on that data, but don’t want to incur the expense of acquiring and managing their own supercomputing infrastructure. At the same time, HPE believes that because it has unique IP, it can be more reliable and cost-effective than the public cloud players, while still offering the advantages of a public cloud.

We asked the analysts: Is HPE on to something here, in that these mainstream use cases are not where the money is for HPE? And is there gold in the hills with HPE’s strategy?

While generally we’re taking a wait and see / “show me” approach with HPE’s LLM strategy, the following points are notable:

  • We do believe HPE can find a profitable niche by providing supercomputing capabilities to those who don’t have the means or resources to invest heavily in this area. They do not necessarily need to compete directly with major players like Amazon.
  • That said, Thurai distinguishes between innovation workloads and mature workloads in the AI domain. For innovation workloads, priorities revolve around experimentation and speed; sustainability, carbon footprint, and cost-efficiency are important but, in his view, will be overlooked. Priorities shift when AI models mature, and issues like security, governance, ethics, explainability, sustainability, and liability become more important.
  • HPE is positioning itself as the go-to solution for these mature workloads, taking on the complex tasks associated with them. If they can communicate this effectively, and the market aligns with their strategy, they could see success, and sustainability could become a significant factor. Furthermore, this strategy does differentiate HPE and comes at the problem from its position of HPC strength.

Bottom Line:

HPE’s strategy caters to a specific sector of the AI market – those dealing with HPC workloads. This niche could offer profitable opportunities, given the specialized needs and complexities involved. However, their success relies on effectively communicating their value proposition and the alignment of market trends.

[Watch this clip of the analysts discussing HPE’s strategy which caters to a more niche sector of the AI market].

“AWS: Everything’s Going to the Cloud…HPE: Uh – no it’s not”

At Discover on the main stage we heard two distinct points of view:

Matt Wood of AWS was on the main stage with Antonio Neri and, much to our surprise, said something to the effect of ‘over time, we still believe most of the workloads are going to go to the public cloud.’ He actually said that in front of HPE’s audience.

Then, Antonio basically countered that with (and we’re paraphrasing with tongue in cheek), ‘The world’s hybrid, dude. And it’s going to stay that way.’

Remember that scene in “Bridesmaids,” where the two bridesmaids duel in song for the attention of the bride? We heard a similarly divergent theme with respect to LLMs this week. HPE put forth the notion that supercomputing workloads are different from cloud workloads and that HPE has the expertise to handle them more reliably, sustainably and effectively. Then on Bloomberg, we heard Adam Selipsky put forth the premise that LLMs are fully dependent on the public cloud and its massive compute capabilities.

At the end of “Bridesmaids,” the two rivals become good friends (i.e. perhaps there’s room for both points of view). While we believe the market in the public cloud for LLMs will be meaningfully larger, we don’t currently have a good enough sense of the delta to put a figure on it.

Here’s a summary of the analyst conversation:

  • HPE needs to capitalize on its strengths, one of which is supercomputing. They may not directly compete with the public cloud, but they could carve out a unique space in the market.
  • The next six months, leading up to GA of HPE’s LLM cloud, will not be a singularly definitive period for HPE’s success in this space. Instead, we believe the process will have a “long tail,” indicating that the full impact and success of HPE’s strategy will unfold over a more extended period.
  • Once models are trained and ready for production, sustainability considerations (Scope 1, 2 and 3 emissions) and other factors become more important. The cloud could serve as a good “playground” for initial development and experimentation, with models brought into HPE’s environment for production work.

Bottom Line:

We believe HPE’s focus on leveraging its strengths, particularly in supercomputing, is sound. However, achieving success in its chosen AI market niche will likely be a long-term process and will require greater recognition from customers that HPE is a player in AI. To get there, the company will have to leverage its distribution channel to attract key partners that are known for their AI prowess.

[Watch this clip of the analysts discussing the divergent points of view between how AWS and HPE see the world].

HPE Must Cultivate AI Mindshare Through Key Partnerships

Despite extensive use of AI in its portfolio of offerings, HPE is not known as a player in AI. Let’s take a look at the ETR data to see which firms are getting share of wallet in the ML/AI space. Importantly, HPE has an opportunity to partner and accelerate its mindshare among tech decision makers.

The chart above shows Net Score, or spending momentum, on the vertical axis and pervasiveness, or presence in the ETR data set, on the horizontal axis for ML/AI players. Right off the bat, focus on the big three public cloud players – Microsoft, AWS and Google. They dominate the conversation. They are pervasive and all show above the magic 40% red dotted line, an indicator of strong momentum.

Databricks also clearly stands out as a player in the mix.

OpenAI is also notable. We got a peek at the July ETR data and it won’t surprise you that OpenAI is setting new records – beyond even where we saw Snowflake at its peak Net Score. And as you’ll see next month in the ETR data, OpenAI has gone mainstream, even in core IT shops.

It’s no surprise that you don’t see HPE in this mix, but if the company’s aspirations are to come true, over time you would want to see HPE on this chart, just as you see Oracle and IBM.

Here are the key points from the analyst discussion:

  • Many companies in the AI space are focusing on training large language models, retraining existing models, and fine-tuning models.
  • HPE, however, has taken a different approach. They are positioning themselves to handle the most substantial and complex models, offering their clients the ability to leverage HPE’s strengths in computing, networking, and storage.
  • Questions remain as to whether HPE will succeed with this strategy. It will take at least another year to gauge the effectiveness of HPE’s approach and determine customer reaction.

Bottom Line:

HPE’s strategy in the AI market differs from many competitors by focusing on handling the largest and most complex models, leveraging their high performance computing, networking, and storage strengths. The effectiveness of this approach remains to be seen and will likely take another year or so to evaluate.

One indicator to watch is how integrated HPE’s solution really is in the GreenLake console. Is it a separate console? Is it really on top of the Aruba Central platform, or is it a separate installation?

[Watch this clip of the analysts discussing the AI playing field and what HPE has to do to hit the radar].

Factors to Watch 

We close by discussing the competitive advantages and challenges HPE faces with some critical areas we’re watching.

Here’s a summary of our wrap up:

  • We believe HPE’s competitive advantage may lie in the infrastructure software within its Cray-based technology, rather than just large language models (LLMs).
  • HPE’s AI ecosystem is currently weak, lacking model repositories, model sharing, and a robust software stack. HPE would likely need to partner with model producers to enhance this.
  • A potential advantage for HPE could be the “bring your own model” approach, providing clients a hassle-free environment for training AI models with comprehensive support.
  • Another potential advantage could be HPE’s ability to handle deployment and inferencing, which is a significant issue in AI. This could be a big opportunity for HPE, especially with smaller AI models and edge computing.
  • HPE’s focus on sustainability could be a differentiator in the future. However, they face significant hurdles such as convincing customers to move their data into an HPE environment.

Generally, we were happy that HPE avoided discussions of quantum computing at Discover. While this may surprise some, it makes sense given that quantum is not yet ready for real-world applications.

Bottom Line:

HPE’s competitive edge in the AI and HPC market may lie in its infrastructure software and approach to handling large and complex models. They also have potential advantages in the deployment and inferencing aspects of AI and could stand to benefit from a future focus on sustainability. However, significant hurdles, including the need to strengthen their AI ecosystem and convincing customers to move their data, remain.

On balance, we give high marks to HPE for including LLM-as-a-service inside of GreenLake. In addition, HPE under CEO Neri has a clear path of differentiation, which over time should pay dividends. HPE’s AI cloud offering will not be available for six months, and it’s unclear how truly integrated it will be, so that is something we’ll be watching as an indicator of maturity. As well, it’s one thing to label the HPC business as AI but another thing entirely to generate profits from the initiative.

That will be the ultimate arbiter of success.

[Watch this clip of the analysts discussing the keys to watch in the future for HPE’s LLM play].

Keep in Touch

Many thanks to Alex Myerson and Ken Shifman on production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight who help us keep our community informed and get the word out. And to Rob Hof, our EiC at SiliconANGLE.

Remember we publish each week on Wikibon and SiliconANGLE. These episodes are all available as podcasts wherever you listen.

Email david.vellante@siliconangle.com | DM @dvellante on Twitter | Comment on our LinkedIn posts.

Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail.

Watch the full video analysis:

Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at legal@etr.ai.

All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.

Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.

