WOYM: Nvidia, Sovereign AI, Biotech, Google Criticism

Guests:
Ram Ahluwalia & Justin Guilder
Date:
02/23/24

Thank you for listening to this episode!

Support our podcast by spreading the word to new listeners. We deeply appreciate your support!

Episode Description

Justin and Ram cycle through reflections from the week prior and the week ahead.

Episode Transcript

Speaker 1 [00:00:01] Hey, Justin. Good afternoon.

 

Speaker 2 [00:00:03] Hey, Ram. Happy Friday.

 

Speaker 1 [00:00:05] Happy Friday. Hey, how you been?

 

Speaker 2 [00:00:08] I've been battling a little bit of a virus, so I'm finally feeling better. It's forced me to do what you choose voluntarily to do, which is abstain from eating food for a few days.

 

Speaker 1 [00:00:20] Oh, wow. That's cool. You don't have an appetite, probably.

 

Speaker 2 [00:00:24] Yeah, I didn't. I finally had sort of an appetite last night and today, but not a normal one. But for three, four days, I didn't.

 

Speaker 1 [00:00:33] It's such an easy way to get started fasting, when you're sick. You just don't want to eat, so you just roll with it. Maybe have some chicken noodle soup, and then, yeah, that's how I got started.

 

Speaker 2 [00:00:44] I get fasting. I'm still not a fan of fasting, though, on a personal level.

 

Speaker 1 [00:00:49] Yeah. Look, many people say the same about, like, exercise and getting up early, you know?

 

Speaker 2 [00:00:55] Exactly.

 

Speaker 1 [00:00:57] You know, but it's all psychology, right? I wasn't a fan for 41 years. I'm still not a fan. I'm not a fan of fasting.

 

Speaker 2 [00:01:06] Yeah.

 

Speaker 1 [00:01:08] You know, actually. Right.

 

Speaker 2 [00:01:10] Exactly. Yeah, but I did it, I guess. So it was interesting.

 

Speaker 1 [00:01:16] So let's get into it. We got a lot of topics to get through here.

 

Speaker 2 [00:01:19] We do every week and we did not chat last week.

 

Speaker 1 [00:01:23] Did not chat last week. So where do you want to start?

 

Speaker 2 [00:01:27] Well, I would say, given what we've been talking about for a really long time, Nvidia is a really great place to start. We've been all over semiconductors and the rise of AI. I still don't know if anyone, even with a pulse on the market, could have anticipated that Nvidia gained more in a single day than any other stock ever has in the history of the stock market. On the order of $1 billion, right.

 

Speaker 1 [00:02:00] Now, records being made, a quarter of.

 

Speaker 2 [00:02:01] A trillion, sorry, a quarter of $1 trillion.

 

Speaker 1 [00:02:05] Nvidia actually, if you mark time, if you look at where Nvidia was in October 2022, at that market cap, Nvidia gained an Nvidia, at that point in time, in a single day. Was Nvidia that dramatically different a business between now and then? Certainly from a sales perspective it was, right. And you had Meta, which had a record gain when they announced earnings, up 20%. The greatest gain that we'd ever seen until Nvidia. From a market cap gain perspective, yes; from a percent change perspective, Meta holds that title. We own both, which is great. Yeah. So, you know, I don't think anyone can really predict how these things go. I saw an interesting take, which I like, around Nvidia. So first off, there's the secular thesis, and we'll get back to that in a second. Second is the tactical. So what happened is, leading up to earnings, you had January: Nvidia ramped up 40, 45%. That's one. So people were anticipating the growth, and maybe there's leaking of the growth; people can read into the supply chain. They anticipated that. And so leading up to the date of earnings, Nvidia was pulled back like 6%. An interesting take I saw was that in the prior two quarters, Nvidia had essentially gone nowhere after earnings, even though they did a beat-beat-raise. And so the market was pulling forward that expectation, and so people were caught offside. So they had the beat-beat-raise, but the pull-forward was done, and so it lifted off. That was a really interesting tactical perspective. From the secular perspective, it's quite interesting. First off, the earnings calls, people stress about these, because they are very different from any other earnings calls. These Nvidia calls start with a survey of the ecosystem, not talking about Nvidia. They say, here's what we saw Microsoft do. Here's what we see happening with countries. And here's what we see these companies doing. They're talking about the planet.

 

Speaker 2 [00:04:15] Through the lens of AI, or...

 

Speaker 1 [00:04:17] Through the lens of accelerated compute, which is their framing of AI. They're thinking, well, both compute and, you know, accelerated compute. And AI is the key thing there too. But it is a really unusual earnings call. Usually the second sentence, after "hey, thanks for joining," is "our revenues are X, our guidance is Y," you know? This one starts at a very high level, about the world, and Jensen's talk is very grandiose. I don't want to say hyperbolic; Sam Altman is hyperbolic. Sam Altman's like, I need $7 trillion to go build machines in the desert. Jensen actually called him out on that, you know.

 

Speaker 2 [00:05:01] Yeah, that's kind of cool.

 

Speaker 1 [00:05:03] I don't think we really need $7 trillion, really, right? So. But yeah, they beat, and they beat by 2 billion in revenue. Not a small number. Can they keep doing that? The wild thing is that it may be the case, we'll find out in a few days, that the forward P/E of Nvidia is once again cheaper than Apple's, which I find hilarious. So the earnings growth is keeping up, almost in lockstep with the price growth, right? Yeah, this is the thing people can't get their heads around.

 

Speaker 2 [00:05:35] It is hard to fathom the growth of earnings. It really is. It's pretty incredible. One question I have for you. We haven't talked a lot about this, but I've been thinking a little bit about it, relative to AI and Nvidia and the future of compute. As we put more demands on artificial intelligence, there's been some discussion that I've seen around moving some of the compute locally and increasing that. Have we looked at that? Are there players that we should be looking at and thinking about that are ahead? Is that something Nvidia is thinking about, in terms of like how they're going to integrate with, you know, other types of hardware devices?

 

Speaker 1 [00:06:31] Yeah. So there's a few different threads there. So one is, you can run an LLM on your laptop or a machine; there's high-performance computing machines. Or, here's the thing: it's the training of the LLMs that requires a lot of compute. The inference doesn't require a lot of compute, unless you want speed, and everyone wants speed for convenience. So there's some practical considerations there, right? Like Meta created an LLM, it's called Llama, and an LLM is a function with a set of weights that takes inputs and cranks out outputs. So you can copy-paste that LLM, and you can run it locally, and you can train it on what you want to do. So it raises another question: what's the value of GPT, these foundational LLMs, when Meta has one open sourced and you do not have to run the training costs on it, which cost a lot of money? The hard part is creating the LLM and getting this function right. So that's one. But the more direct way to answer your question is yes, that's exactly what Apple's focused on. Apple is saying, gee, how do we run a local version of an LLM on the phone?
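[Editor's note: the "function with a set of weights" framing above can be sketched in miniature. This is a hypothetical toy, not Llama itself: one linear layer and a softmax over a three-token vocabulary, just to show that a model is nothing more than weights defining a function from inputs to outputs.]

```python
import math
import random

random.seed(0)
# The "open-sourced" weights: 4 input features mapped to 3 output tokens.
# A real LLM is the same idea scaled up to billions of weights.
W = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]

def toy_model(x):
    """One linear layer plus softmax: the whole model is just W."""
    logits = [sum(xi * W[i][j] for i, xi in enumerate(x)) for j in range(3)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]

probs = toy_model([1.0, 0.0, -1.0, 0.5])
# Anyone holding a copy of W gets identical outputs for identical inputs,
# which is why copying open weights lets you run the model locally.
```

[The expensive part, as noted above, is finding good values of W (training); evaluating the function (inference) is comparatively cheap.]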

 

Speaker 2 [00:07:42] Right. The other thing that I think is also interesting, that I've been thinking about and discussing with some folks, is the small language models. So as opposed to large language models that do pretty much everything decently well, although there are some hallucinations in the machines.

 

Speaker 1 [00:08:01] Yeah.

 

Speaker 2 [00:08:04] What about the future of small language models that are much more narrowly focused and tighter in terms of their training?

 

Speaker 1 [00:08:15] That's where the opportunity is. I mean, the large language models, I would argue, have failed to deliver a meaningful use case other than coding. And coding is a great use case. A lot of code is out there. You can go from a one-X engineer to a 7X, 8X engineer overnight. But it's a narrow sliver. So the promise of AI hasn't been delivered at the application layer. We're just met with disappointment. You know, in the dot-com era we had email and the browser. Those are two killer apps, legitimate killer apps, that transformed every day how we communicate, send information, browse, interact, share our brands, engage with customers. The chat room was created. Instant messenger was created, and obviously there was a bust. And then it came back again with web two. Right now with AI we have summary, and we have generation, and the generation hallucinates. The summary is not bad, you know. But we don't have that killer use case yet. So on the micro LLMs, I agree, that's the opportunity. And that's what's working, by the way, right? There are startups that have the micro LLMs for different scenarios, from video editing to the transcription tool we use to take notes and transcribe. That works. So I agree, I think the micro LLMs have greater potential in the near term. They're verticalized, specialized. Yeah.

 

Speaker 2 [00:09:47] Exactly. It's interesting to think about the ecosystem that has to develop for these types of applications, and the infrastructure that's necessary for it to continue to grow. Because obviously some of that is technological in nature, yeah. But some of that is also standards, you know, whether those are guardrails or ethics, however you want to think about it. User access controls, you know. If you're putting in an AI system and training it on your company data, does everybody in the company have access to that AI unfettered, which maybe is reading the CEO's emails, which may be reading all of the HR files, which may be reading every employment contract? All those things I think are going to be really interesting, because you sure as heck know that somebody, whether an outside actor or an internal, you know, just-curious actor, is going to say, show me the so-and-so.
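[Editor's note: one common way to address the access-control concern raised above is to permission-filter documents before they ever reach the model's context. A minimal sketch follows; the document shape, user names, and function are invented for illustration, not any real product's API.]

```python
# Hypothetical shape: each company document carries a reader list (an ACL).
DOCS = [
    {"text": "Q3 revenue plan", "readers": {"ceo", "cfo"}},
    {"text": "Cafeteria menu", "readers": {"ceo", "cfo", "intern"}},
]

def retrieve_for(user, query, docs=DOCS):
    """Return only matching documents the requesting user may read.

    The filter runs before retrieval results are handed to the AI,
    so the model never sees text the user couldn't see directly.
    """
    allowed = [d for d in docs if user in d["readers"]]
    return [d["text"] for d in allowed if query.lower() in d["text"].lower()]
```

[Under this sketch, an intern asking about "revenue" gets nothing back, while the CFO gets the plan: the "just-curious actor" problem is handled at retrieval time rather than trusting the model to refuse.]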

 

Speaker 1 [00:10:52] Right. That's why it's taking Google a while to roll out their transformation of, you know, Gmail and overall Workspace with AI. It takes time to solve all those hard problems. You know those safety constraints you mentioned? That's what's screwing up AI. You know, it was great when it first rolled out, like in February, March, before it had all the safety constraints. If you look at the white paper, by the way, I went through the Meta Llama white paper. They've open sourced the AI, which is a gift to humanity, by the way. We don't know the ChatGPT framework; we don't know that for OpenAI. For Meta, they've shared exactly what data sets, the training function, the safety guardrails, openly. And they actually show that there's a trade-off between helpfulness and safety in the AI. And you've seen this with Google. I've asked Google questions around the best way to get ahead of this kind of risk with this kind of option, and it's like, sorry, I cannot give you that. I felt like, come on, dude, what are you doing?

 

Speaker 2 [00:11:51] I've been getting that answer more than I've ever gotten it before. Sorry, I can't help you with that request.

 

Speaker 1 [00:11:56] I was like, well, I just need a joke, Google. And there's a screenshot or tweet somewhere I saw where they show the safety prompt that OpenAI loads ahead of the user's prompt. Now, you don't see it; it's just there in the background, the same way when you configure your GPT. It's like two pages long of this text. You probably want to delete most of it, if not all of it. And it just adds that burden on the system when it's trying to solve your question. In addition to answering, it has this other constraint of saying, no, I can't help you. And that has caused Google a lot of issues and headaches this week.
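[Editor's note: the mechanics described here, a hidden safety prompt riding ahead of the user's prompt, look roughly like the common chat-message format sketched below. The safety text and function are invented for illustration; real providers' hidden prompts are far longer and not public in full.]

```python
# Invented placeholder; the point is only its position, not its wording.
HIDDEN_SAFETY_PROMPT = "Follow the safety policy. Refuse disallowed requests."

def build_messages(user_prompt):
    """Prepend the hidden system message, as chat-style APIs commonly do."""
    return [
        {"role": "system", "content": HIDDEN_SAFETY_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Tell me a joke")
# The user only ever typed the joke request; the system message is the
# extra constraint the model must also satisfy on every turn.
```

[This is why the constraint "adds burden": every request the model answers is really the concatenation of both messages, not just what the user typed.]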