March 20, 2025


The Flip Side podcast – Episode 69


Brad Rogoff: Hello, and welcome to The Flip Side. I’m Brad Rogoff, Head of Research here at Barclays. And I’m joined today by Ross Sandler, who covers the consumer internet landscape for our Equity Research team out of Silicon Valley. Ross, I bet you were expecting me to give you a call about the January episode of The Flip Side, not necessarily the March one, but I felt like a more in-depth debate as we got a little bit further away from the hysteria of DeepSeek in January was maybe better than just a hot take.

Ross Sandler: Great. Yeah, it’s great to be here, Brad. Well, you know what they say about AI, every month is about a year in regular technology terms. So, we’ve come a long way since January, but I’m very happy to be here.

Brad Rogoff: So, we have even more to talk about. And let’s do it. Let’s get down to business. So we’re here to talk about the big US tech companies that have been at the vanguard of the AI arms race in recent years, and the giant cloud computing providers such as Microsoft, Amazon, Alphabet, and even to some degree, Meta. So for much of the past few years, as the world got more and more excited about AI, they all announced ever larger CapEx budgets, spending hundreds of billions of dollars on data centers big and powerful enough to keep them out front, and offering the massive computing resources that are required for AI.

So, as I alluded to earlier, that all seemed to be going great until the beginning of this year in January specifically. And then along came DeepSeek, the Chinese AI chatbot, which suddenly suggested that you didn’t need actually to have all that raw computing power and storage and expensive architecture to get really good results from AI.

Ross Sandler: Yeah, totally. DeepSeek was a huge shift. Some are calling it the Black Monday for AI. If you remember back then, a bunch of these AI hardware stocks took a huge hit on that epic day in January. The one-day loss for Nvidia, I think, was the largest on record. Huge moves just across the board. The move in semiconductor names relative to telecom names was the largest since the dotcom bust back in 2000. And you know, we haven’t really recovered since.

Brad Rogoff: Right. But away from Nvidia and the architecture around Nvidia, the selloff was actually quite a bit milder. I mean, Meta’s still up a bit on the year. Microsoft’s forward P/E is still pretty robust at 30 times. It seems to me that there’s plenty of positive sentiment around those stocks when it comes to AI, even if, as we see in markets as a whole, there’s not much positive sentiment in recent weeks. When I look at those stocks, is it fair to say that there’s too much hype in the hyperscalers?

Ross Sandler: Yeah, that’s a great line, Brad. I might use that one.

Brad Rogoff: Well, yes, I’m absolutely joking a little bit there. But on a more serious note, DeepSeek was a big deal, at least in my opinion. It may not be showing up fully in valuations yet, but I have to think it challenges the basic narrative that’s really underpinned all of these stocks. It’s shown that you don’t necessarily need to have billions and billions of the most expensive, most powerful chips to run really complex calculations.

Ross Sandler: Yeah, it definitely felt like a big moment. And it raised a bunch of questions as to what the big labs out here in the West are doing. I’d say stepping back, it’s important to remember that DeepSeek didn’t really invent anything new in AI. They just engineered everything much better than what was done before. So the analogy I like to use here is that of Tesla versus BYD. As we all know, Tesla more or less invented the consumer EV market. And BYD has come along and copied a bunch of elements of Tesla’s design. But what they’ve done, more importantly, to gain huge market share is architect factories and processes that are just unmatched. And they’ve taken a huge amount of cost out of the production process.

So that’s kind of what DeepSeek did here. They basically leveraged open-source AI that was already out there, and they totally nailed the compute stack. What’s happened is that they’ve been able to build models much more efficiently than what we’ve seen from anybody else, even if you give them the proper accounting around their training costs, which might have been a bit understated. So really stepping back, it raises the question of what the big US AI labs are doing. They clearly could be more focused on efficiency.

Brad Rogoff: So I like your Tesla analogy, but if I stick with it, that’s a name that seems to be coming under a bit of pressure here. And while there are clearly other factors at play, at least some of that pressure is because of fundamentals and sales. And that’s all related to your analogy.

Ross Sandler: Yeah, that’s a whole other can of worms that is a little bit outside of my swim lane, so I’ll leave that to you guys. But just getting back to AI, the question we get here with the DeepSeek moment is: are we in a bubble? And did that bubble just burst? And I think, look, it’s somewhat nuanced. A bubble can be related to overhyping things, which is certainly the case for AI. Or it could simply be a mismatch in the timing between revenue generation and CapEx, as in this case.

So what we’ve been trying to do is not really answer the bubble or no bubble question, but really, what’s the right level of AI CapEx that these hyperscalers should be underwriting? What’s the appropriate level today versus the magnitude of the overbuild? So some are calling it a bubble. Some just think this is the typical building ahead of demand. I think it’s probably a little bit of both. And yeah, DeepSeek. You know, what that shows is we may have overdone things a little bit in terms of the total CapEx.

Brad Rogoff: So I want to go back to something you were alluding to a little bit earlier, which is the point around open source. I don’t think that’s something we should be glossing over here. OpenAI and others see the workings of their models as proprietary, but DeepSeek just put it all out there. Spilled all the beans, as you put it in that report you did in January with our China Tech research team. So that makes it attractive in theory for developers to use and build on, and potentially to keep finding shortcuts.

Ross Sandler: Yes, that plus the fact that it came out of China in the context of the political escalation and the ongoing trade war between the US and China, the tariffs, et cetera. It just meant that this whole thing got a lot of eyeballs and a lot of hype beyond what would have happened if it had come from somewhere else in the world. The DeepSeek story was seen as a ‘necessity is the mother of invention’ story, where what that company lacked in state-of-the-art semiconductors it made up for in algorithmic efficiency and software engineering. But the reality is, these guys at DeepSeek are just really good. And China is really good at AI. They’re really good at software development, even if they use some of our open-source technology.

Brad Rogoff: So you said China’s good at AI. They’re good at software, right? But what they’re not as good at, as you were alluding to, is producing the latest generation of chips. So if it’s possible for this to work anyway, it means budget AI can be a reality, which has to change the atmosphere. Once again, going back to the hyperscalers, all those huge CapEx plans, I would think, come under greater scrutiny. And if a Chinese AI lab can claim to have built a reasoning model with similar capabilities to Google’s and OpenAI’s products at a fraction of the price, even though exactly how much is debatable, and with no access to Nvidia’s cutting-edge chips, then the landscape has to have changed. So what’s the use in investing heavily to be an industry leader if fast followers like DeepSeek can compete with a discount offering?

Ross Sandler: Yeah, you’re starting to sound like my colleagues in research, Brendan Lynch and Tim Long, who cover the data center space and the IT hardware and communication infrastructure space for us. They’ve long argued that the entire data center industry is overbuilding and exposed to the risk of a slowdown if we overshoot and end up with excess supply as a result. And what’s interesting is, in typical bubble formation, shareholders have become accustomed to the idea that more CapEx is better and more data centers are better, whereby the bigger you go, the more your stock price is going to go up. And that encourages more of the same behavior, bigger CapEx plans and so on. So that’s been the setup. It’s obviously somewhat precarious. And you’re right that DeepSeek really exposed that, perhaps for the first time.

Brad Rogoff: I’m glad to hear I’m not alone, especially since those guys are experts on the topic. Getting back to your hyperscalers, though.

Ross Sandler: Yeah, totally. So, covering the hyperscalers, there are still plenty of catalysts ahead. One of the things we’re seeing right now in AI is this big shift away from traditional large language models, or what we call dense models, towards newer reasoning models, or what we call sparse models. And this is a big shift if you think about it from the user interaction. These AI systems historically have just been more or less text completion engines, something that could write an email for you or summarize this podcast for you. And we’re headed in a direction that’s a little bit different, whereby the AI system is able to complete tasks autonomously and check its answers before coming back to the user, and essentially do more without the human in the loop.

And this trend also means a huge shift away from the more general-purpose training chips towards these custom inference chips. If you look at where the compute is spent in the process of building an AI system, training has historically involved a huge amount of that compute, and a huge amount of chips. So if the broad shift to inference continues to play out, it probably means, all else equal, that the compute and power required is going to come down compared to previous forecasts. But by our estimates, we still need a lot more compute. We’re talking about a gap between now and 2028 of around 250 billion exaflops.

Brad Rogoff: You’ll have to help me out here, Ross, on that one. I’ve read a lot about AI. I was sitting on the beach on vacation last month reading about semiconductors and chips in general, but exaflops, that’s one that escapes me.

Ross Sandler: Yeah. So exaflops, for what it’s worth, is a billion billion calculations per second being run on one of these next generation semiconductors. So, if you kind of step back a bit to put it more kind of simply, consensus estimates currently call for a little over $300 billion in total CapEx for these hyperscalers, about $100 billion of which is going into the AI semiconductor portion of that CapEx. So, it’s about a third of total. And what we’re saying is that’s probably not enough to satisfy the overall compute required, given what’s going on with reasoning models and overall AI adoption. So, we think we might need to quadruple the inference compute required between 2025 and 2028. And at that point, inference is going to represent a little over half of total industry compute.
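To make that arithmetic concrete, here is a quick back-of-the-envelope sketch of the figures cited above. The dollar amounts and the quadrupling estimate come straight from the discussion; the implied 2028 total is an illustrative derivation that assumes inference lands at exactly half of industry compute.

```python
# Back-of-the-envelope sketch of the hyperscaler figures cited in the podcast.
# Dollar amounts are the consensus estimates Ross mentions; the 2028 split
# of exactly one half is an illustrative assumption, not a forecast.

total_capex_2025 = 300e9   # ~$300bn total hyperscaler CapEx (consensus)
ai_semis_capex = 100e9     # ~$100bn going into AI semiconductors

semis_share = ai_semis_capex / total_capex_2025
print(f"AI semis share of CapEx: {semis_share:.0%}")  # "about a third"

# Ross's estimate: inference compute roughly quadruples between 2025 and 2028.
inference_2025 = 1.0               # normalized units of inference compute
inference_2028 = 4 * inference_2025

# If inference then represents roughly half of total industry compute,
# the implied total is double the inference figure:
implied_total_2028 = inference_2028 / 0.5
print(f"Implied 2028 total compute (normalized to 2025 inference): "
      f"{implied_total_2028:.1f}x")
```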

Brad Rogoff: But why would it have to grow that fast? If we consider the use cases that haven’t fully evolved yet, and we think about what people are actually doing now, which you alluded to earlier, if really what most people are using this for today is, we’ll use your example, summarizing the podcast, even though I’d really like everyone to listen all the way through, why do we need this much growth?

Ross Sandler: Yeah. So the adoption cycle is really kicking into high gear right now. ChatGPT, which is the leader, for example, just announced earlier this year that they crossed 400 million users. And what’s interesting is they added that most recent 100 million cohort of users in about two months, which is about a 33% increase in usage. For context, it took them about a year to get their first 100 million users, while this last 100 million came in about two months. So we’re seeing adoption ramp up, we’re seeing things accelerate, and it’s really on the back of these new reasoning models.

So OpenAI released o1 and o3, which are their two reasoning models that have much more capability. And they’ve also integrated recently with iOS, so with Apple. And so, that’s really broadening out the adoption for both consumers and enterprise. We’re also seeing new products. So agents are all the hype right now and are just cropping up left and right. There’s a new one actually that just came out of China called Manus that’s getting a lot of attention. Some are calling this the second DeepSeek moment in AI.

Brad Rogoff: I’m going to keep being the skeptic here, though, for a minute. Right. So, some of those added users, I mean, they’re big numbers. I get it, right, but aren’t they probably just giving it a try? And let’s get back to my hype comments. I mean, everyone wants to see what all the hype is about. What I’m not convinced of is that they’ll become power users, power users that need more power, right, and those investments on compute.

Ross Sandler: Yeah, that’s probably right today. But as the systems get more capable and more powerful and can complete many more tasks and solve everyday pain points for both consumers and those of us at work, you’re going to see these trends really continue to take off. So, the analogy I like to draw here is that it’s kind of like what we saw in the early days of mobile. The first applications coming out in, like, 2010, 2011 were things like Tetris games or Words with Friends. It took a few years beyond that for native mobile apps to come out. Examples of this would be things like Uber or Instagram, which really used the technology in a new way. They used the phone’s location, they used the camera, they used the map. And once this happened, the adoption curve really took off, and these became high-retention, everyday use cases. So today, you can’t really think of a world without some of these mobile applications. The same thing is about to happen in AI, where the capability gets much better.

Brad Rogoff: All right. Well, you did a good job putting that in terms we can all understand. When you start talking about Tetris and Words with Friends, that’s definitely more familiar to most of us than exaflops. So the next question I have to ask you is: what do you imagine will be the applications of AI that, by the end of, say, this decade, we cannot believe we ever lived without?

Ross Sandler: Yeah. So that’s what we call the rise of agentic AI. That’s what this is all about. These are autonomous systems that will perform a specific task without really any human intervention, and do it end to end really well. One example of this is a product called Operator that OpenAI just launched in beta. This is their new next-gen AI agent, and the service can handle a range of everyday tasks such as ordering groceries, booking travel, writing emails, summarizing podcasts and much, much more. The agent products are in beta right now. We expect adoption to pick up throughout the year and capabilities to get much, much better. And the important thing is these agentic products work hand in hand with these new reasoning models, and they can broaden things out and solve a lot more use cases.

Brad Rogoff: All right. Hold on a minute. I think I’m going to go now to the extreme in the other direction really, because if these high-end reasoning models are so powerful and so impressive, they work this well. I mean, surely the sky is the limit and wouldn’t they be used for all queries at some point? And getting back to kind of talking about humans and interaction with this, wouldn’t these agentic AI models start to pose a threat to the livelihood of many humans, or at least those humans employed in repetitive clerical tasks?

Ross Sandler: Absolutely. Yeah. This is likely what’s going to happen, but we’re not going to see complete replacement of humans. I think this will supplement what humans are doing with AI agents. So you can see companies like Salesforce, Sierra, ServiceNow offering the most basic agents for enterprise customers. And there’s even like a next crop of companies coming up that are startups. So, Y Combinator, the famous startup incubator, has funded a bunch of AI agent companies focused on accounting, on financial research, on medical billing, on customer support. And just think about almost any area within the enterprise. There are a few startups kind of attacking that use case. But for the most part, I think rather than replacing human jobs, what we’re probably going to see is reasoning models serve more as a supplement and a productivity enhancer, with humans still in the loop kind of directing the agents on what to do.

Brad Rogoff: Well, putting my macro hat on, productivity is typically a good thing for growth, but it certainly will be an interesting balancing act for the economy when it comes to some of those efficiencies leading to job loss. Now, I’ve clearly been a little bit more bearish than you throughout this, although you piqued my interest a little bit here with some of this agentic AI. So, let me ask you, what is the upside case for the hyperscalers? Can you walk me through a near-term bull case?

Ross Sandler: Yeah. So, we get this question a lot, and I tend to answer it this way. The thesis generally comes down to whether chinchilla scaling continues. So, chinchilla scaling laws are this theory that came out of Google many years ago. And it’s a basic rule that says the more data and the more compute you use for each generation of AI system development, the better the performance and the overall quality of the AI system will be.
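For reference, the rule Ross describes has a concrete form in the original Chinchilla paper (Hoffmann et al., 2022, from DeepMind, a Google-owned lab), which models training loss as a function of parameter count and training data. This formulation is from the paper, not the podcast:

```latex
% Chinchilla parametric loss fit (Hoffmann et al., 2022):
% L is training loss, N is parameter count, D is training tokens,
% E is the irreducible loss, and A, B, alpha, beta are fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% Minimizing L under a fixed compute budget C \approx 6ND yields the
% compute-optimal prescription that model size and data scale together:
% N_{\mathrm{opt}} \propto C^{0.5}, \qquad D_{\mathrm{opt}} \propto C^{0.5}
```

The practical takeaway is consistent with Ross’s summary: as long as you keep scaling data and parameters together with more compute, the loss keeps falling and model quality keeps improving.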

Brad Rogoff: You explain that quite well, Ross. But then at another point in time, not on this podcast, maybe you’ll actually have to explain to me what that has to do with chinchillas. Keep going though.

Ross Sandler: Yeah. So, if chinchilla scaling continues and the training and development of AI systems remains on the same path we’ve witnessed over the last couple of years, then the trade continues to be that the best performers in the market are going to be the components that go into these AI systems. So, this would be semiconductors, networking equipment, data centers and so on, and probably less upside for the hyperscalers. Because if chinchilla scaling continues, you’re going to see CapEx continue to go up at a massive clip. You’re going to see the hyperscalers continue this arms race, which requires much more CapEx. And that starts to weigh on free cash flow, even if it’s ultimately good for them in the long term.

On the flip side, if chinchilla scaling starts to stall out and you start to see everybody shift towards reasoning models, which cost much less to train while performing better, then we do see a lot of upside for the hyperscalers, because what this means is that the CapEx required in the future would come down and free cash flow would start to turn back up meaningfully. And as we move from the model stage to the product adoption stage, application stocks should start to outperform, depending on who has the usage. So, this would be software companies that have agents and lots of adoption. So yes, innovations like DeepSeek might mean that the cost curve is bending downwards, but we still see a huge amount of upside in inference from here. So, the bottom line is there’s a huge amount of growth in front of us. AI is just getting going. The story is definitely not over, and we’ll just have to see how things play out over the next couple of years.

Brad Rogoff: Definitely not over. Thank you, Ross, for joining us today. I also look forward to checking out all the Google search trends for chinchillas after people listen to this podcast. Clients can keep up with all our AI-related coverage, and the DeepSeek effects in particular, in the theme hub we’ve created on Barclays Live.
