Brad & Will Made a Tech Pod.

53: The Mass-to-Volume Ratio

Episode Summary

Nvidia bought ARM. Nvidia bought ARM! It's one of the biggest semiconductor deals in history, so we dig into what it all means, from some basics on CPU architectures to the implications for the mobile, enterprise, and machine-learning markets. One of our nerdiest episodes yet! Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod

Episode Notes

Nvidia bought ARM. Nvidia bought ARM! It's one of the biggest semiconductor deals in history, so we dive deep into what it all means, from some basics on CPU architecture to the implications for the mobile, enterprise, and machine-learning markets. One of our nerdiest episodes yet!

Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod

Episode Transcription

s1e53-brad and s1e53-will

Will: [00:00:00] Hey, Brad, I got Windows 2004 last night.

Brad: [00:00:04] you're in the future. You're living in 2004. Now

Will: [00:00:06] I haven't installed it yet, but I have a full command line interface.

Brad: [00:00:09] This is like the Late Night man used to say: "In the year 2004."

Will: [00:00:14] So, so I have no idea what that is actually.

Brad: [00:00:18] Wait, what?

Will: [00:00:18] No, I've never heard that one 

Brad: [00:00:19] That's the Andy Richter, the Conan... the previous iteration of Conan O'Brien, or was that two iterations ago? The first iteration of Conan O'Brien, which was, I believe, Late Night.

Will: [00:00:30] Late night with Conan O'Brien

Brad: [00:00:32] Late Night with Conan O'Brien, his first real late night show. That was... you never... you don't remember that?

Will: [00:00:37] Hey look, 

Brad: [00:00:37] You don't remember the "In the Year 2000" bit?

Will: [00:00:39] Look Brad I'm gonna go ahead and tell you something bad.

Brad: [00:00:43] I'm super hoarse right now 'cause I'm slightly hung over.

Will: [00:00:48] No, look, that was 20 years ago, man.

Brad: [00:00:51] Oh, more than that, actually. Like 22, 23, 24 years ago, 25. That was when I was in high school. So that was like 20, maybe 25 years ago.

Will: [00:01:00] That might've been the time period I wasn't watching. 'Cause when you were in high school, I would've been in college, and I was doing other things.

Brad: [00:01:07] What is it about college that makes you not watch TV?

Will: [00:01:10] I didn't have a TV for a while

Brad: [00:01:11] Okay that was me too, actually.

Will: [00:01:13] Yeah. And then I got a TV and it was mostly used to play NHL 96. It turns out.

Brad: [00:01:18] Yeah. It's a good collegiate activity.

Will: [00:01:20] Yeah, well, like it's the one time at that time in your life, in my life, at least being able to play multiplayer games was a brand new thing. Like having constant access to multiplayer games was new.

Brad: [00:01:31] Having constant access to people around to play them was also new. You didn't have to, like, drive to somebody's house. They were just around.

Will: [00:01:38] yeah. You can just be like, Hey man, uh, I got the fourth tab. You want to play some NHL and you'd be like, yeah, I'll play some NHL. Um, what.

Brad: [00:01:46] That's what living in the future is.

Will: [00:01:48] So, the, I gotta say, the first thing that Windows 2004 did... I have this Task Scheduler thing set up to run a Python script that makes my audio controller work with my interface.

Right. And the first thing Windows 2004 did was break that.

Brad: [00:02:03] Nice. Great. Yup. That's the dark secret of living this streamlined automation life in Windows: it's just a giant house of cards. And...

Will: [00:02:12] Yeah. So Task Scheduler seems like it maybe doesn't like Python scripts now or something, but I think I can just, like, load the WSL stuff and set up a crontab and maybe never think about this again.
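
(For reference: a crontab entry for the kind of job Will is describing would look roughly like the line below. The schedule and script path are made-up placeholders, and the cron service has to actually be running inside the WSL distro for the job to fire.)

    # hypothetical example: run the audio-controller script every 5 minutes
    */5 * * * * /usr/bin/python3 /home/will/audio_controller.py >> /tmp/audio_controller.log 2>&1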

Brad: [00:02:22] You spend hours and hours setting up Task Scheduler and Group Policy and shortcuts of all varieties, and along comes Windows Update and just goes... and it all, it all collapses.

Will: [00:02:39] I have not spent as much time engaging with Task scheduler as you have.

Brad: [00:02:44] It's terrible.

Will: [00:02:44] It's bad. Yeah. I was going to say that's the secret of it. Is it sucks ass?

Brad: [00:02:48] It doesn't work. It doesn't work reliably. It doesn't do it. It doesn't schedule tasks, which is right there in the name.

Will: [00:02:55] oh, it schedules them 

Brad: [00:02:56] It does, but not reliably. It's not always clear what user is running them. So sometimes the thing you want to do, you don't actually have like, privileges to do.

Will: [00:03:03] Well, and, and also, and also sometimes like the one job that task scheduler has is to look at some conditions. And when those conditions are fulfilled to do a thing, sometimes it looks at the conditions, the conditions are fulfilled and it just doesn't do anything.

Brad: [00:03:18] Totally. Well, that's what I'm getting at. Sometimes... and it doesn't tell you why, that's the thing. It might have tried to do the thing and was told by the system, "You don't have the rights to do this," but it never tells you that.

Will: [00:03:27] But I can't... like, I need to find the command line interface for Task Scheduler, so I can run a task and see what's happening at the console.

Brad: [00:03:35] It has that. I can... we can talk later, or I can just tell you right now what it is. Actually, hold up.

'Cause I've got, I've got a whole bunch of them set up right now. I've got... you know what, this is going to turn into the whole episode if we keep going like this.

Will: [00:03:47] Do I want to know what love is? Yes. Show me, give me a

Brad: [00:03:50] Do you want me to tell you what command line stuff I have set up that I can just type a word in to make happen, rather than navigating?

Will: [00:03:58] This feels strangely kinky, but, okay.

Brad: [00:04:00] Uh, for example... 'cause, you know, like most of the stuff in Windows, you have to navigate like five layers of menus, right, to find the button you want to click. So, like, I have automated things like switching between the light and dark theme in Windows.

Will: [00:04:13] Why would you ever switch between the light and dark theme in Windows

Brad: [00:04:16] I, I liked the light theme during the day and the dark theme at night.

Will: [00:04:19] Doesn't it do that automatically.

Brad: [00:04:20] No, it does not. We've talked about this, man. So...

Will: [00:04:25] I thought it did that automatically now.

Brad: [00:04:26] I have shortcuts called light and dark. And I can pop up the Start menu and type "light," and it will automatically switch to the light theme. Uh, I have one called desk swap that will swap which one is my primary desktop.

Um, I've got a variety of them that, uh, mute and unmute various sound input devices.
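
(Brad doesn't spell out how his light and dark shortcuts work, but one common way to script that toggle is flipping two per-user registry values, 1 for light and 0 for dark. A sketch:)

    rem switch apps and the Windows shell to the dark theme (use /d 1 for light)
    reg add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Themes\Personalize /v AppsUseLightTheme /t REG_DWORD /d 0 /f
    reg add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Themes\Personalize /v SystemUsesLightTheme /t REG_DWORD /d 0 /f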

Will: [00:04:51] Oh, yeah, that's a NirSoft jam right there, right?

Brad: [00:04:54] Yes. Uh, no, I use Task Scheduler for that, but ah, hold up, let me tell you what I was just gonna tell you, um: schtasks.exe is the command line version of Task Scheduler. So you can run schtasks /run and the name of the task. That's for automating... that's...

Will: [00:05:14] This is a fantastic, incredibly useful cold open.

Brad: [00:05:16] that's for your automating purposes. We'll talk more later.

Will: [00:05:51] Welcome to Will and Brad Made a Tech Pod. I'm Will.

Brad: [00:05:54] I'm Brad, how you doing Will?

Will: [00:05:56] I'm over here making a tech pod? How about you, man?

Brad: [00:05:57] That's right, man. It's just... here we are on a Saturday morning, feeling lively and crisp. Ready to talk?

Will: [00:06:03] everything's great. 

Brad: [00:06:05] Nothing bad at all. Whatever. I'm sorry.

Will: [00:06:08] Yeah, no, I'm just re look.

Brad: [00:06:11] They're not fine, but they're fine.

Will: [00:06:13] Yeah, everything's fine. I slept on the couch last night. Not because I have any infractions, I didn't do anything wrong.

Brad: [00:06:19] Okay. You just couldn't be bothered to get up

Will: [00:06:22] No, my daughter has been in our room the last couple of nights, and we are a queen-size-bed house. So three people in a bed is too many people.

Brad: [00:06:32] That's a lot.

Will: [00:06:33] So discretion was the better part of valor last night.

And I chose to just, uh, you know, couch sleep.

Brad: [00:06:39] She is. She's enough of an actual little person now that she occupies a full half of the bed. Not so much like two people plus a tiny, tiny human who's kind of...

Will: [00:06:49] If she just occupies half of the bed, she's doing better than she ever has. She's, you know, she's 7. She's a full sprawl sleeper. It's like arms and legs in all starfish positions. So...

Brad: [00:07:01] Punching way above her weight class on bed occupancy. Yes.

Will: [00:07:05] Her mass to volume ratio is, is off the charts.

Brad: [00:07:08] Oh boy.

Will: [00:07:09] Um, but, uh, yeah, so we're, we're a little things are we're great.

We're doing great

Brad: [00:07:17] Fun fun. Everything's fine.

Will: [00:07:18] what are we going to talk about today? Well, first off,

Brad: [00:07:22] Yes.

Will: [00:07:22] I want to point out we're still reading Hitchhiker's guide.

Brad: [00:07:24] Uh...... yes

Will: [00:07:27] Brad, you got to get on this.

Brad: [00:07:28] I know. I need to get... So I read both intros, right, in bed. I read both, like, the foreword and the introduction.

Will: [00:07:35] the Neil Gaiman one and the Douglas Adams one

Brad: [00:07:37] Written by Neil Gaiman, which largely restated what was written in the introduction by Douglas Adams. But that's okay. I will say this: I found the Douglas Adams-authored introduction wildly more charming. Nothing against Neil Gaiman, but there's just a certain lilt to the way that Douglas Adams phrases things. If that's what I'm in for, I'm excited. That's what I'm trying to say. Or...

Will: [00:08:06] We'll get to this when we talk about the book, but the thing that has struck me rereading this is how much of modern discourse, especially in our friend circle, is shaped by the weird quirks of Douglas Adams's use of language.

Brad: [00:08:21] Okay, great. I'm excited about that. There's just a certain idiosyncrasy of speech patterns that I really go for. Like in the intro, it stuck with me, that's how memorable it is: when he's talking about being drunk in a field in Austria, when he had the idea for the book. Yes. The way he described his drunkenness was, if I remember the phrasing, "we are speaking of the mild inability to stand up." It's like, okay, you got the point across. That's a very tidy and economical and slightly charming way to get that idea across.

Will: [00:08:58] Yeah. Yeah. Um, so it's been fun rereading those. Uh, this was our challenge for hitting a thousand patrons. We're at, uh, almost 1100 now. So it's...

Brad: [00:09:10] Geez. Thanks. Wow. 

Will: [00:09:12] A lot of people signed up this month.

A lot of people are stoked about Hitchhikers guide

Brad: [00:09:14] Everybody wants to read Hitchhiker's Guide. Thank you, everybody. That's... wow, that's flattering. Um, the last thing I'll say about it real quick is, it doesn't look that long. Maybe it's the omnibus edition that I've got, but I thought somebody had said it was just kind of a standard 300 page novel, but in the omnibus it's like 180 pages or something.

Will: [00:09:35] It's like, depending on how fast you read... I think my Kindle told me it was going to be five hours.

Brad: [00:09:41] okay. Yeah,

Will: [00:09:42] yeah. At the outside. So

Brad: [00:09:44] not a problem. I got, I got plenty of time and

Will: [00:09:46] it's a good Sunday afternoon

Brad: [00:09:47] Yes, I'm getting ready to get into it. I am looking forward to it. I know at least one person I've seen on the Discord said they just finished reading it for the first time and was just in love with it.

Will: [00:09:56] It is... one of the things that I want to talk about is, it is striking to me how prescient it is for a thing that was written in the late seventies, about the modern technology of today.

Brad: [00:10:09] Uh I don't know if that's a good thing or not.

Will: [00:10:11] Uh, pros and cons. Anyway, uh, today, today we didn't mean to make this Nvidia month, but Nvidia yeah.

Brad: [00:10:18] Like, I, I, we went into this podcast with this topic in mind, then it occurred to me, like, we probably need to say like, this is not sponsored by Nvidia

Will: [00:10:26] Yeah. Uh, no this 

Brad: [00:10:28] we're doing this show purely because this is a very newsworthy topic.

Will: [00:10:32] I mean, the new GPUs came out and they were worth discussing.

Brad: [00:10:36] Yes. It's like Nvidia has no chill at the moment. They just keep dominating the news.

Will: [00:10:40] Well, yeah, and we had the game streaming thing last week, which was not entirely an Nvidia thing, but like the, the thing that I like best, and I think is the, maybe not the easiest to use, but the best results is the, is the Nvidia solution with Moonlight.

Brad: [00:10:53] Can I ask real fast, not to derail this: um, how is Moonlight, I don't know if we covered this, how is Moonlight on the physical Steam Link? The actual box that Valve sold? Like, is it adequate?

Will: [00:11:04] So that's where I run it. It seems like the hardware in the, so, um,

Brad: [00:11:09] That hardware cannot do H.265, which is the reason that I ask. So I wonder if H.264 is up to the task.

Will: [00:11:16] It's fine for 1080p. I have not tried... you can't do 4K on the Steam Link. Um, the setup is a little fiddly. Like, there's not an auto-updater for Moonlight, and I'm not sure if that's because Moonlight is still early or if there's some limitation of the Steam Link that won't let applications update themselves.

Brad: [00:11:36] but, but, but just performance latency wise. It's fine.

Will: [00:11:38] It's great. I played a fair amount of, uh, Death Stranding... or, uh, Watch Dogs 2, last night.

Brad: [00:11:45] Oh, 

Will: [00:11:45] yeah. Using it

Brad: [00:11:46] I might actually get that Steam Link out of the box and put Moonlight on it. See how it is.

Will: [00:11:50] If you haven't... so I use one Steam Link that is WiFi, one that is Ethernet. They both work fine. The WiFi one, occasionally you get graphics artifacts; the wired one, I never see that.

Brad: [00:12:04] Are there kind of your commercial video streaming entertainment apps available for the steam link? Probably

Will: [00:12:11] Not really like you can't get Netflix for it.

Brad: [00:12:13] right. That's what I'm at. Okay. So it could not be like a good Apple TV replacement necessarily.

Will: [00:12:18] I think you're better off with a dedicated piece of hardware for that, that is not the Steam Link. So the flip side of that is, I just got iOS 14 on my Apple TV yesterday. So I haven't tried the new support for controllers, but theoretically they fixed controller support with this release of iOS, or tvOS.

Brad: [00:12:40] wait, was it broken before?

Will: [00:12:42] Well, so it supported all the sticks and buttons, except for the clicky sticks. The sticks on the Apple TV and the iOS devices were the MFi standard, so they would map to that, which meant there were no clicky sticks. Which is fine if you're playing games built for iOS. But if you're trying to play stuff that's built for the PC that expects you to have clicky sticks...

You had, like, situations where there were weird chords, where you have to, like, press the Xbox button on your Xbox controller. And it was not great.

Okay.

Um, so, but today we're talking about this ARM and Nvidia deal, and it's interesting... the reason we're talking about it is 'cause it's one of the biggest semiconductor deals of all time.

Brad: [00:13:22] Yeah, for sure

Nvidia acquired ARM for $40 billion in cash and stock.

Will: [00:13:28] Yes. Nvidia is the biggest semiconductor company right now. They're the biggest chip designer. Um, and it's important because... so, Nvidia: we all know Nvidia 'cause they make video cards, and they made the Switch CPU/GPU SoC, you know, all of this other stuff.

Brad: [00:13:50] Games, mostly games. That kind of application is what they have been known for forever.

Will: [00:13:55] Right. But, but the games are kind of the secondary part of NVIDIA's business these days. Cause it seems like they're, they're trying to be the, the hardware that runs the machine learning infrastructure of our future of future compute.

Brad: [00:14:10] like I kind of had an inkling of this before, but like, this deal has made it very obvious that Nvidia is kind of leading this whole other life. I feel like, like they kind of, they kind of have a double life now. Right. Of like everybody knows them for their gaming applications, but like they're getting into data centers and machine learning and potentially some other stuff off of this deal.

That is, it has nothing to do with games.

Will: [00:14:31] Well, they've been in that for... there's a very good reason that they released Ampere first as a data center product, as a machine learning compute product, not a gaming product. Um, so ARM, if you don't know what ARM is: ARM is a chip designer, a design company out of the UK, uh, that basically spun out of Apple and Acorn, I think, uh, or Apple and Acorn started it.

Uh,

Brad: [00:15:02] I didn't know that

Will: [00:15:04] hold on. I want to make sure I got that right.

Brad: [00:15:07] I had no idea ARM Holdings, or Arm Limited now, was the...

Will: [00:15:12] Uh, yeah. So Acorn Computers developed the Acorn RISC Machine architecture in the eighties. And then Apple put money into them.

Brad: [00:15:20] that's where arm comes from.

Will: [00:15:23] Uh, but now arm stands for advanced risk machine rather than acorn risk machine, which is what it originally did. Anyway. The point is they designed CPU and then license that technology out to everybody.

And they, um, unlike Intel, which, say, designs a CPU and then makes that CPU and then sells it to you...

Brad: [00:15:42] That's the only way you can get it.

Will: [00:15:43] Yeah, that's it. That's it. ARM says, hey, here's all the bits that go into our CPUs, these SoCs, these system-on-a-chip designs. Um, you can get whatever you want and you can take the pieces you want.

And, like, if you want this, here's a package, here's the overall box, and then you can kind of Chinese-menu it out, where you say, I want one of this math coprocessor and six of these linear math accelerators. And you can take the bits that they give you and assemble them however you want. Like...

Brad: [00:16:12] Like, if you're Apple, for example, you can be like, ah, you can keep your GPU, we have our own now, and we're gonna put our own GPU on the die with your ARM stuff.

Will: [00:16:22] Exactly. Um, so you're licensing an instruction set and a chip design, is my understanding, with ARM.

Brad: [00:16:29] Yes. I started writing the notes like, I don't know that either of us is equipped to get into this, or if we even need to: the whole CISC versus RISC dichotomy.

Will: [00:16:37] We talk about it a little bit. I,

Brad: [00:16:39] like I have, I've been, I've been interested in what it means for 25 years, but I've never a hundred percent gotten my head around it.

Will: [00:16:45] yeah. Uh, so. Um,

Brad: [00:16:48] or should we even, should we even like talk about what an instruction set is for people that don't know?

Will: [00:16:53] so

Brad: [00:16:53] essentially, it's essentially the very base level operations that CPU is carrying out inside itself. Right.

Will: [00:16:59] Pretty much. And so at one level there's, like, the set of things you need to be Turing complete in a computer, which is like, you know, ANDs and ORs and NORs.

Brad: [00:17:10] You're talking Alan Turing, not the Turing GPU.

Will: [00:17:14] I'm talking about Alan Turing, who the Turing GPU is named after.

Brad: [00:17:17] Yes. Yes. Just, just to be clear

Will: [00:17:18] Um, so, so at like the low level, all the math that happens in the computer is like four or five different kinds of mathematical operations

Brad: [00:17:25] Right Right

Will: [00:17:26] But when you go up a level, then you start to build specific functions on the chip that are hardware accelerated that makes certain types of math faster.

So, like, the difference between integer math and floating point math is a good example of that. If you think back to the old 386/387 days, Intel said, okay, these are mostly integer processors, and if you need floating point math, then, you know, you can buy this extra...

Brad: [00:17:52] We'll sell you... we'll sell you a math coprocessor that you can stick on your board. Then eventually that logic started being included just inside the CPUs, 'cause it made sense.

Will: [00:18:03] Well, because it turns out that there were a buttload of applications for floating point math, massive amounts of floating point math, like games.

Brad: [00:18:10] Like, I don't know if this is illustrative or not for people who aren't computer scientists, which I certainly am not, but for example, you can go on Wikipedia and just pull up a list of every instruction in the x86 instruction set, and you can scroll down and it's literally just like: there's an operation called ADD, which adds; there's add with carry; there's convert byte to word; compare operands; convert word to doubleword; decrement by one; signed multiply.

You know what I mean? Like those are just to be, to clarify for my own benefit, as well as anybody listening. Those are, those are fixed functions that are implemented in the hardware of the chip. Right.

Will: [00:18:52] When you were talking about somebody writing machine code, they are writing code that calls these specific instructions. Right? Um, so

Whereas if you write a C program, you write a program that has some basic human-readable logic that isn't just a series of math instructions, which then gets translated into that by a compiler.

Brad: [00:19:11] So, yeah. So everything from, at the end of the day, like, everything from the highest level, like Python or JavaScript or whatever, down to C, any compiled language: at the end of the day, all that's actually happening on the CPU is these instructions being executed.

Will: [00:19:25] It's interpreted down into this stuff, yes

Brad: [00:19:26] It's just that the compiler is what handles turning the human code into machine operations. Okay.

And
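
(To make the compiler step concrete: a trivial C function, and one plausible x86-64 translation a compiler might emit. The function name is made up, the exact instructions vary by compiler and optimization flags, and the register use assumes a Linux-style calling convention where the first integer argument arrives in edi. Note that "signed multiply" and "decrement by one" from Brad's list show up directly.)

    /* human-readable C */
    int scale_then_decrement(int x) {
        return (x * 3) - 1;
    }

    /*
     * Roughly what it becomes as x86 machine instructions (Intel syntax):
     *
     *   imul eax, edi, 3   ; IMUL: signed multiply, eax = x * 3
     *   dec  eax           ; DEC: decrement by one
     *   ret                ; the result is returned in eax
     */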

Will: [00:19:33] So in, in the nineties or late eighties, I guess, uh, the, the CPU architectures had gotten large and the instruction, the instruction sets were increasingly 

Brad: [00:19:45] complex, you might say.

Will: [00:19:46] Well, that's a retronym. You know, the "complex" was a response to reduced instruction set computers.

Brad: [00:19:52] Oh, really.

Will: [00:19:53] Yeah. So these guys... uh, I think the MIPS people came up with this first, but it seems like it was maybe developed in parallel in multiple places.

Brad: [00:20:00] What, really? Like, the concept of RISC?

Will: [00:20:02] Yeah, if we,

Brad: [00:20:03] I think I read this morning it started in Berkeley. I think it started in academia.

Will: [00:20:08] Uh, well, anyway, the point is: rather than have a large, complicated set of instructions that did a bunch of accelerated things, they were like, why don't we just make all the processors run faster and keep them simpler. Uh, and I think specifically fixed-length instructions. So where complicated instructions could use multiple cycles per compute,

these are the same-size instructions that run at the same speed. So you can do things more...

Brad: [00:20:36] Okay. Okay. Like the,

Will: [00:20:37] We're getting into science stuff that doesn't matter here.

Brad: [00:20:38] Yeah, yeah, yeah, yeah. We're in, that's pretty far out of our depth. That's what you're saying. Like the instruction has to like execute and complete within the same cycle

Will: [00:20:46] Or within a specified number of cycles. The other thing that is important to know about all this stuff, 'cause this was, like, in the nineties, when Mac and PC people were crazier and we'd yell at each other about which computer was better, you'd hear all this RISC versus CISC stuff. Back then there was an appreciable difference.

Brad: [00:21:04] A lot of PowerPC. Like, people would wear PowerPC T-shirts and stuff like that back then.

Will: [00:21:08] Look, people wore all sorts of bad T-shirts. Like, I had one of those color-shifting T-shirts. We don't need to talk about sartorial choices. The point is, in the modern context, there isn't really a CISC versus RISC anymore. There are RISC instruction sets with complex elements, and there are complex instruction sets with RISC-like elements.

They're all... everything's mishmashed together now.

Brad: [00:21:35] Multiple people who absolutely know what they're talking about have told me basically that, like, yeah, even a modern x86 CPU has RISC-like elements inside it.

Will: [00:21:44] Well, I mean, the big thing, we should do an episode about this 'cause it's really interesting, but the Pentium 4 was a turning point for all of this stuff. 'Cause with the Pentium 4 they were like, we can make these really crazy, complex, super deep pipelines, and we can predict what's going to happen fast enough that we can run it at, like,

five, six gigahertz. It's going to be so fucking fast. It's going to blow your ears off. It's going to be the most amazing thing ever. And then it didn't work. And that was the moment that the CISC wave broke, and they were like, well, maybe we should simplify this whole thing and make it, you know, real parallel.

So anyway...

Brad: [00:22:16] So, I mean, back to the actual topic at hand, moving back to the ARM stuff. Um, my understanding of why ARM is so big in the mobile space is that primarily it's because it's so power efficient. The performance per watt is very good. Is that a direct result of the way that the RISC stuff processes instructions?

Will: [00:22:33] I don't know if it's a direct result of the way the RISC stuff processes instructions, or a direct result of the design goals of the ARM processors over, over the...

Brad: [00:22:43] I see. I see. Okay.

You can make a really power hungry RISC chip, just like you can make a really efficient CISC chip, you know. Like, there are some 0.3 watt TDP x86 processors.

Okay.

Will: [00:23:00] but

Brad: [00:23:00] You're just, you're just not getting a lot of performance out of them necessarily

Will: [00:23:03] Well, not even that. People don't use them for the types of things that you use ARM processors for, because the market has gone in the ARM direction.

Brad: [00:23:10] That's what I mean, like that's why, that's why there aren't phones with x86 mobile chips in them.

Will: [00:23:14] I mean, there are, you just don't... nobody knows what they are. Yeah. Yeah. You can buy x86 Android phones, or you could for a while. I don't know if you still can.

Brad: [00:23:22] That's that's bizarre.

Will: [00:23:23] Um, so, okay. So SoftBank bought ARM a few years ago. SoftBank is an investment holding company from Japan, uh, who invested in such companies as WeWork, Theranos, and Uber.

Brad: [00:23:37] Oh, no. Oh no,

Will: [00:23:38] uh, they, uh,

Brad: [00:23:39] That's a questionable portfolio.

Will: [00:23:42] Look, they've done some stuff that worked out for them, but they've not had great times the last few years,

it seems like. And, uh, they sold their stake in ARM for $40 billion in stock and cash to Nvidia. Or they want to; they've announced that they want to. This is a huge deal. Um, it is interesting because they didn't talk about it with investors or regulators before they announced that they want to do this.

And there's a pretty heavy regulatory burden here, because, uh, the UK, the US, China, and maybe even Japan are gonna care about this, and...

Brad: [00:24:20] And the EU, right?

Will: [00:24:21] uh, Well EU no, cause it's UK. So probably not the EU, by the time the deal has to pass regulatory burdens. but, but yeah, like it has it's, it's going to be a while before this is.

Active, uh, you know, this is an announcement of intent, not an announcement of success.  Um, it's interesting and it's important because there are literally billions of arm chips shipped every year made by Apple Qualcomm, Samsung, Nvidia, and a bunch of other smaller companies that you probably don't know about.

Brad: [00:24:56] Yeah. Sure. So should we just rattle off most of the typical devices you would find ARM chips in? It's like every iPhone on the market, just about every Android phone on the...

Will: [00:25:05] Almost every Android phone.

Brad: [00:25:06] just damn near every tablet on the market of any kind, the switch

Will: [00:25:10] well, no, no. Cause like surfaces and stuff like that are all x86,

Brad: [00:25:13] well, yeah. Yeah, but that's, I, I feel like they're sort of stretching the definition of tablet.

They're like, that's moving, that's almost moving into computer. Laptop E kind of space,

Will: [00:25:21] I mean it is a laptop? It's just a laptop without a keyboard attached

Brad: [00:25:24] Yeah, like the Surfaces are weird hybrid devices that are, uh... but, like the iPad, all the Amazon Fires...

Will: [00:25:31] Oculus quest,

Brad: [00:25:32] Yes. Uh, the Switch. The Nintendo Switch is ARM based.

Will: [00:25:37] uh, some routers, which is shocking

Brad: [00:25:40] Yes. Oh, and the Raspberry Pi... like, practically all of the, the entire...

Will: [00:25:44] The MiSTer.

Brad: [00:25:44] Well, yeah, the MiSTer has an ARM core on it, in addition to the FPGA. Pretty much the entire single-board computer scene, like the Raspberry Pi, all that stuff is ARM based.

Will: [00:25:54] I haven't torn one apart to see, or looked at the teardown, but my guess is, like, stuff like Alexas and Google Homes and all of those Nest screen things, your video doorbells, a lot of your webcam kit, your internet-connected webcams, um, any kind of internet-of-home smart device probably has an ARM chip in it.

Brad: [00:26:14] Yeah.

Will: [00:26:15] Um, internet of things, smart device.

Brad: [00:26:17] Suffice to say, if it's not an Intel- or AMD-made x86-based device, the odds are it's ARM.

Will: [00:26:24] Well, there's also MIPS, so there, there are other...

Brad: [00:26:28] Well I didn't want to get into that, that's kind of an edge case. Like my router has a MIPS chip in it.

Will: [00:26:32] So a lot of networking stuff has MIPS chips cause they're cheaper.

Brad: [00:26:35] In fact, we'll get to that. That is a topic to talk about later. But ARM is extraordinarily widespread, is the point.

Will: [00:26:40] Also, do you know what else has ARM? The world's fastest supercomputer.

Brad: [00:26:44] Oh really?

Will: [00:26:46] so super computers and data centers have buttloads of arm chips now

Brad: [00:26:51] Okay. So that is a relatively recent development, right?

Will: [00:26:55] last five, 10 years, probably.

Brad: [00:26:56] yeah, yeah.

Will: [00:26:57] Uh, and the reason that happens is because in a data center, in a supercomputer context... well, okay, they're two different use cases. Data centers care a lot about performance per watt, both in terms of heat generated and power consumed.

Brad: [00:27:13] I feel like you hear a lot about the cooling needs of a data center, right? To the extent that that sometimes helps dictate where they're built and stuff like that.

Will: [00:27:21] There's a reason that data centers get built on the Columbia River in Oregon, and it's because there's a lot of hydroelectric power and a lot of cool water. Um, so yeah, basically that's where we're at: there are some places where you want a big, giant x86-64 CPU with, you know, hundreds of cores and the whole thing.

But also there are places where you're going to want something that's lower power and a little bit less capable, maybe, but also does exactly what you need it to in that place. Now, the sup...

Brad: [00:27:55] do you ever want to just hang out in a data center? Are you. Are you? 

Will: [00:28:00] They're really loud.

Brad: [00:28:00] Well, I'm sure you'd probably want some noise cans. I bet noise canceling headphones would work extremely well in that white-noise kind of environment. That's the ideal use case for noise canceling headphones. But are you as big a fan of air conditioning as I am?

Will: [00:28:15] I mean, right now, yes, I am a hundred percent a fan of air conditioning. I would love to have air conditioning, and it seems like it'd be really nice.

Brad: [00:28:20] How cool do you think it is in your average data center? Don't you want to just, like... don't you wanna just go in there and, like, lie down on the floor for a while?

Will: [00:28:26] So I've never been in a data center. I've been in a supercomputer before. And it was not a comfortable environment for humans.

Brad: [00:28:34] Is it too cold?

Will: [00:28:35] It was very cold.

Brad: [00:28:37] Are we talking, like, convenience-store walk-in-freezer type cold, or what? Surely not that cold.

was not, no, it was not 40 degrees. It was probably 65.

Oh, that sounds pretty good

Will: [00:28:48] It was chilly

Brad: [00:28:49] That sounds pretty good.

Will: [00:28:50] Look, I was in Texas, and it was great for the first minute or two. And then you're like, oh man, it's kinda cold in here. It's, you know, when you're walking through the grocery store in the summertime and you get into, like, the freezer aisle, or the butter aisle, where the refrigerators are all open, wasting a ton of energy every day.

It's like that, where you're like, oh man, I wish I wasn't wearing shorts right now, but this feels okay.

Brad: [00:29:11] sounds great.

Will: [00:29:13] Yeah. Um, so.

Brad: [00:29:14] If anyone can get us into a data center, let me know.

Will: [00:29:18] Uh, the supercomputer angle is interesting too, because in supercomputers they use ARM processors to reduce latency and, like, kind of traffic-cop a lot of the interactions between the other processors in the setup. Uh, and different supercomputers have different requirements.

Like, I spoke to somebody who works on supercomputers that do fluid dynamics calculations, where they're trying to figure out how wind interacts with skyscrapers to affect weather patterns in big cities. And in order to do that, every particle that simulates a bit of wind has to be able to talk to every other particle, 'cause they all interact.

And you end up with these massively complex situations where latency really matters. And if one part of the simulation gets out of sync with the others, then the whole thing has to be scrapped and started over. So they build these clusters so that there's negligible latency from one side to the other, and you need a bunch of CPUs evenly distributed through the network to do that.

Brad: [00:30:21] To manage that. Am I right in assuming that maybe the ARM cores in that scenario are not handling the heavy math of the simulation, they're just doing, like you said, the traffic copping, the bookkeeping to keep everything in sync and that sort of thing?

Will: [00:30:33] Probably some of both, but I don't know. I'm not an ex... that is a level of granularity that I do not have on this.

Brad: [00:30:40] Yeah. I mean, I'm just sort of throwing that question out into the ether, but I would wonder if they're using some kind of ASIC or something GPU-esque, almost, to handle the heavy compute aspect of that task.

Will: [00:30:53] So my understanding is that they're set up in a way that you have processors that are tuned for specific things. So, like, you'll have x86 cores for doing lots of big floating point math...

Brad: [00:31:04] Right, right. That's what I was getting

Will: [00:31:06] ...that is pipelineable and deep. And then you'll do GPU cores for massively parallel stuff where you need thousands upon thousands upon thousands of cores, not hundreds of cores.

Um, and then, yeah, I think the, ARM processors are going to be the, the kind

Brad: [00:31:21] The glue.

Will: [00:31:22] making sure everybody gets fed. Basically all the, all the different cores are getting, getting the data they need in the right time. Um, so, okay. So w like, why does Nvidia want arm?

Brad: [00:31:35] why does Nvidia want arm.

Will: [00:31:36] Well, yeah. They like having data... uh, market share.

And, um, if you look at Nvidia right now, it's a video game graphics company, sure. They very much want to be the machine learning, AI hardware provider. Um, and you can tell that by looking at how they're pushing Ampere to workstations and data centers; you can look at the fact that they're putting 10,000... uh, 10,000 shader cores on these GPUs with, like, dedicated machine learning units. And ARM...

Also, you can get dedicated machine learning acceleration on arm chips now,

Brad: [00:32:16] And then we should mention, like, a lot of Nvidia's own messaging around this deal pertains to AI specifically. Like, the line I pulled out here is that ARM is based in Cambridge, and they said that they want to turn the Cambridge location into, quote, "a new global center of excellence in AI research."

Will: [00:32:33] Yes. Well, and if you think about Nvidia's path over the last 20 years, you know, 'cause this is like a 24-year-old, kind of 23-year-old company now, they looked at the PC graphics market and were like, look, we can start building these: A, there's a market for PC graphics, but B, this market provides a path to massively parallel

machine learning accelerators, ray tracing accelerators, all of that stuff. And, you know, they used the market demand for games and 3D graphics to pay for the R&D to get to the point that they had machine learning processors.

Brad: [00:33:13] Can we, um, can we just generally use the terms, AI and machine learning interchangeably in this context? Cause they used, they use AI in most of their messaging.

Will: [00:33:23] uh, I think AI is the consumer friendly term

Brad: [00:33:26] right. But the, the, the, just to be clear, they mean basically the same thing.

Will: [00:33:30] I mean, we could just call it, you know, linear, statistical, linear math, but it's, you know, people are going to be less interested in that. Um, but yeah, machine learning and AI essentially,

Brad: [00:33:39] I'm just, I'm just trying to clarify.

Will: [00:33:41] I would say machine learning is probably a subset of AI, if we wanted to be highly technical, but I don't think it matters. Um, so... Nvidia has sold ARM chips in the past. The Tegra, uh, is in the Switch. It was in the Nvidia Shield and the Shield Pro, I think, their high end...

Brad: [00:33:58] Yeah, it still is. I mean, yeah, I looked at the Tegra roadmap before this, and they've got a new one on the way. Uh, of course... I mean, well, whatever, there are rumors about a new Switch next year, so there would be a new one in that, for example. Um...

Will: [00:34:12] Uh, but, but like they haven't made big inroads into the phone market. There were a couple of phones that shipped on Tegra a few years ago.

Brad: [00:34:18] Was that Razer phone on Nvidia?

Will: [00:34:21] I feel like maybe it was. It seemed like their performance per watt wasn't where you wanted it, which in turn meant you had bad battery life on the phone.

Um, but it's been pretty good on, like, the Switch. While the initial launch Switch didn't have the best battery life, it's more than sufficient for what I've been using over the last four or five years, or however long it's been.

Brad: [00:34:46] So one thought I had about this acquisition: obviously there's data center and supercomputer stuff, there's machine learning research, there's a lot of applications for this. But can you tell me if this is an accurate perception of the mobile market? Like, Apple's ARM-based chips are, like, insane, right?

Will: [00:35:03] God tier

Brad: [00:35:04] Like, yeah. They went out and hired, like, some of the best chip designers in the world. Right?

Will: [00:35:09] And spent billions of dollars

Brad: [00:35:10] Because they have unthinkable amounts of money. But my understanding of the Android market is, like, the Qualcomms, the ARM chips in that space, just don't keep up with that, right?

Will: [00:35:21] I don't think it's that they suck. Like, if you look at...

Brad: [00:35:25] I'm not saying they're bad necessarily. It's just the,

Will: [00:35:28] Well, it's a different situation. Cause like Apple knows what their software team is doing and they have the software

Brad: [00:35:34] that's fair.

Will: [00:35:34] They tune their software roadmap to the hardware roadmap so they can, like, really get everything out of every bit of silicon that they put in that chip. If, say, you know, you're LG and you're buying a Qualcomm processor, you probably aren't going to spend the billion dollars a year that you need to turn that off-the-shelf processor into a custom processor like Apple does.

Brad: [00:35:55] Totally. But the point I'm getting at is, to use the most recent example: when the iPhone SE came out, all the reviews were just like, it is insane, the performance advantage that this budget Apple phone has over the current flagship Android phones. So what I'm getting at is, I wonder if Nvidia maybe sees an opportunity to become, like, the dominant SoC supplier in the Android space off of this.

Will: [00:36:20] So my guess is that they're probably going to pull out of the Android SOC space as part of the regulatory burden for this deal to happen.

Brad: [00:36:29] Interesting. Yes... actually, yeah, that's totally a good point. So, antitrust stuff notwithstanding, could they possibly be looking to do that?

Will: [00:36:37] I think Nvidia is more interested in the data center, and making sure that things like tensor cores end up in other people's phones.

Brad: [00:36:44] Okay. 'Cause this exact idea was floated as well in some of the Android press that I looked at, or something covering the Android media market. Like, people speculating that, you know, there's kind of nothing stopping Nvidia from holding the best... like, you know, they can keep licensing...

Will: [00:37:02] I think the regulatory stuff in China and the UK. would probably be the thing that stops that

Brad: [00:37:07] The idea, to expand on it, is just that they could keep licensing the ARM designs that they currently make to the same customers, but that they could, like, have a slightly more competitive version that they only...

Will: [00:37:19] the good one for themselves.

Brad: [00:37:21] or something along those lines.

Will: [00:37:23] I would be shocked if that is allowed to happen in, like, in the US. I wouldn't be surprised if it's allowed to happen in the EU. And the UK? That's not gonna fly.

Brad: [00:37:32] Maybe, maybe this is just expressing my complete lack of faith in any kind of antitrust action to ever take place again,

Will: [00:37:38] Um,

Brad: [00:37:39] unless it's, unless it's politically motivated.

Will: [00:37:41] I think... well, here's the thing: given that Nvidia has essentially no market share in that space right now, my guess is that they sacrifice the potential future there for licensing out the instruction set. Because here's the thing: if they do something too anti-competitive, there are other similar, but not directly compatible, instruction set CPUs out there...

Brad: [00:38:03] we will get to that

Will: [00:38:04] including some open stuff

Brad: [00:38:05] Yes. Well, that stuff... that stuff is probably... it's not even ARM related necessarily, but that's probably the actual most fascinating aspect of this topic. But we'll get to that.

Will: [00:38:13] Yeah. So I think my guess is that Nvidia will protect the Switch and Shield business for set-top boxes. 'Cause honestly, if you're thinking about Nvidia, if Nvidia's Tegra problem is that they perform really well but their performance per watt and battery life are bad,

tablets make a ton of sense, like the Switch, like the Shield tablet. Set-top boxes make a ton of sense, where your performance per watt is less important; as long as you can passively cool it, nobody cares. So I think we'll see them sacrifice the ability to be in phones. 'Cause here's the question you need to ask: is it better for

Nvidia to make a little bit of money on every single one of the one or two billion phones that are sold every year, or to make a lot of money on some small percentage of the market because they piss everybody off and they all switch to MIPS or whatever. Um...

Brad: [00:39:08] Ah, that is an actual, exact question I have when we get to the MIPS and RISC-V stuff, but I'll save that for now. Um...

Will: [00:39:15] I think, like I said, I think it's much more likely that we'll see it go the other way, where Nvidia makes tensor cores and things like that, like specific parts of the current GPU architecture, things like their H.264 and H.265 encoders, which are really good, um, and makes them available to ARM licensees.

Brad: [00:39:35] So what you're saying,

Will: [00:39:36] Along with the other IP.

Brad: [00:39:37] What you're saying is it's less likely that Nvidia is going to take the ARM tech and kind of pull it behind their curtain, and more likely that some of the Nvidia tech is going to bleed back into the publicly available ARM stuff.

Will: [00:39:46] I would, I would really, really hope that the regulatory one of the three or four regulatory agencies involved would ensure that that doesn't happen

Brad: [00:39:56] Right? So, like, an example is ARM has a GPU design that they can license along with the CPU stuff, but you're saying that Nvidia graphics could end up replacing that Mali GPU design.

Will: [00:40:09] Oh, yeah. Or even some subset, like small bits and pieces. The other thing is, like, PowerVR has a whole ARM GPU device; like, they sell a buttload of ARM GPUs too. There have been three companies making ARM GPUs for a while, four if you count Apple, even though they don't sell those to external vendors. Um, like, the beautiful thing about ARM is if LG or Qualcomm wants Nvidia's GPU, they can do that; if they want their own GPU, they can do that; if they want PowerVR, they can do that.

And you can assemble the chips however you want. It's up to you. It's like Lego, but way more complicated and expensive to build. Um, so yeah, I think that's it. And the outside edge case for this is that I think we're going to see Nvidia dropping ARM CPUs in weird places in the future.

Like we saw a little bit of this with the Ampere stuff and the DirectStorage API to accelerate storage, which in reality isn't going to start happening for like two or three years, because it takes the APIs a while to roll out. Um, I think in the next five or ten years we'll see Nvidia dropping ARM CPUs on GPUs.

To reduce the, uh... simply because there's often math that you need to do in games that is latency dependent. Like, this is one of the benefits consoles have: because the consoles use shared CPU and GPU memory pools, where the CPU and GPU are both talking to the same memory all the time,

it's really inexpensive for the GPU to pass something off to the CPU and then come back to the GPU again.

Brad: [00:41:51] Also, the CPU and the GPU are on the same die. So there is...

Will: [00:41:54] often. Yeah.

Brad: [00:41:55] there's a system bus to communicate over

Will: [00:41:58] Right. And whereas on the PC, the GPU has to talk across the PCI Express bus, which, like, on a GPU that's running at, you know, thousands of megahertz, you're looking at sometimes five or 10 or 20 clock cycles on the GPU before a communication can happen across the PCI Express bus anyway.

Brad: [00:42:16] My one question about that idea of including, like, ARM designs on a graphics card, for example, is... I mean, would you agree that Nvidia probably employs a lot of the best integrated circuit designers in the world at this point?

Will: [00:42:29] I, it would be hard for me to say that, but sure. The results are good.

Brad: [00:42:34] that's what I mean. I think the proof is in the pudding there, it seems, but like, do you think like a general purpose, like arm. Set up on a graphics card would make more sense that like I only back up what they seem really good at designing logic on their cards built for purpose, right. Or designed for purpose, like, like this, like this direct storage stuff.

Like you assume that there is hardware on those GPUs that is very good at decompressing that data, like, it, would it make sense to have a more general purpose unit on there as opposed to just designing whatever you need?

Will: [00:43:08] Look, it's not outside the realm of possibility that we're going to be plugging NVMe drives onto your video card in five years.

Brad: [00:43:15] Okay. So at that point, a very small, low-power ARM core that handled that I/O would make sense, or something like that. I see. Okay.

Will: [00:43:22] Like, like.

Brad: [00:43:23] Like, that's kind of what I'm getting at. A couple of weeks ago, when we did the GeForce 3080 stuff, I mentioned that the graphics card is rapidly becoming, like, the central feature of a PC.

Will: [00:43:34] Hey guys, plug your mouse into your video card for the best performance.

Brad: [00:43:38] Like have, have your video card handle your storage IO now, you know, like at some point the graphics card is the computer.

Will: [00:43:44] well, I mean, that's what Jensen's been trying to make happen for 20 years.

That's 

Brad: [00:43:48] What I mean is, like, it feels like we're moving toward that, where, like, what is this? I mean, obviously you need a CPU, but it's increasingly marginalized, it seems like.

Will: [00:43:56] It was literally a talking point, I think, around the GeForce... probably the 5800 launch, 5900 launch maybe, when they started doing general purpose compute on GPUs. He's like, yeah, this is going to be the most important processor in your computer from now on. And, you know, it probably sounded hyperbolic then, but it was probably correct.

Brad: [00:44:18] I'm trying to conceptualize in my head the weird, fuzzy, shifting line between the graphics card and, like, say, the motherboard. Like, for expansion purposes, it would never make sense to turn a graphics card into a motherboard and literally have the GPU on the board and plug a CPU into it.

But it feels like that distinction will be less meaningful. Does that make sense?

Will: [00:44:44] Yeah. Well, when you look at those big compute clusters, where they have like a huge board with a shitload of PCI slots and lanes, and then you just drop eight boards into it with one CPU, the amount of power that goes to doing the GPU math versus the CPU math is strongly, strongly, strongly in favor of the GPU side.

It's weird. Computers are going to be real weird in about 10 years. That's the TL;DR.

Brad: [00:45:11] That's kind of what I'm getting at, you know, like at some point, if the graphics card is handling 90% of your system functionality anyway, why not just make that the whole system.

Will: [00:45:18] Well, yeah. I mean, I'm interested... there are people in our audience, I know, who are further down this rabbit hole of what computers look like in 10 years than we are.

Brad: [00:45:30] literally, I literally,

Will: [00:45:32] I'm curious to hear their reaction to this 

Brad: [00:45:33] Yeah. I literally copied and pasted a conversation out of the Discord a few days ago, about, like, where CPUs are going in the next 10 to 15 years, that, like, frankly made my head explode. There was some really wild stuff in there.

Will: [00:45:47] I mean, and this isn't idle speculation. Like, look at what a computer looked like in 2000, when I moved to California: I think I had a dual Celeron machine, so two physical packages socketed into two sockets on the motherboard. Like, the difference between that computer and the computer that is in my pocket now is astounding.

Brad: [00:46:12] The orders of magnitude,

Will: [00:46:13] Yeah, it's because of ARM. Um, and, like, everything: the wireless internet connection is better than my wired internet connection was, my screen is way higher resolution, I can do more things at the same time than I could then. It's bonkers. The whole thing is crazy.

Brad: [00:46:29] Yeah. The, the acceleration of development of technology has been pretty dizzying for me, not to,

Will: [00:46:35] And yeah, a lot of that is on arm.

Brad: [00:46:37] Um, not to get off on some old

Will: [00:46:38] of, uh, the rise of phones that move billions of units a year, like , left the opportunity to really push the technology because there was a lot of money at stake and, and yeah.

Brad: [00:46:52] But just, you know, I mean, not to get off on some old man rant, but if you started out with parallel ports and text-mode-only displays and that sort of thing, where we are now versus then is, quite frankly, hard to grapple with sometimes.

Will: [00:47:08] It has been a long road.

Brad: [00:47:09] Yeah. Uh,

Will: [00:47:13] I mean, I, you remember, I remember taking, you know, hours to download a one megabyte driver.

Brad: [00:47:20] yeah,

Will: [00:47:23] Uh, so Brad, what do you think this means for phones? Yeah. Oh God. I never had a 300 baud. I started with a 14.4, I think.

Brad: [00:47:28] Yeah, so did I, I didn't, I didn't either. I just, I know it existed. I'm sorry. What were you asking?

Will: [00:47:33] Uh, look, I, I couldn't hear you. My audio coupler was, came off the phone, so, uh, we gotta resync this modem call.

Brad: [00:47:40] Jam the thing back on there. You gotta rerun your init string.

Will: [00:47:43] Um, what does this mean for phones?

Brad: [00:47:46] For phones.

Will: [00:47:47] For phones

Brad: [00:47:47] Oh you mean like right now?

Will: [00:47:49] Yeah. Right now, Nvidia is buying ARM.

Shit. Am I, is my phone going to be bad now? Ah,

Brad: [00:47:53] We covered my apparently abortive idea that they were going to try to dominate the Android space. So probably not a ton at the moment. Maybe, like you said, the Android SoCs get better GPUs or something like that, or more machine-learning acceleration, something like that.

But like,

Will: [00:48:12] That'd be my guess.

Brad: [00:48:13] Maybe not five, but that's probably, like, minimum three, four years away from showing up in shipping products, right? With the regulatory overhead and the R&D-to-product lead times and stuff like that.

Will: [00:48:25] I think they suggested this is going to take 18 months to get approval and maybe close. So, yeah, my assumption is there's not going to be too much work happening by other parties on anything that comes out of this until that deal is closed.

Brad: [00:48:42] Here's, like, a really high-level legal and business question: is there anything to stop them from beginning R&D with this purchase in mind, prior to the deal being fully approved?

Will: [00:48:53] I have no idea.

Brad: [00:48:54] right. Like, I don't know. I quite literally don't know anybody who would know the answer to that question.

We need, like, high-level IP lawyer types and regulatory, antitrust-law types to answer a question like that, right?

Will: [00:49:08] The amount of anxiety I had when I had bought a house, but hadn't closed on it yet and had gotten the keys and started knocking holes in the walls was very high. I can't imagine.

Brad: [00:49:17] That's literally that question, right? Like, are they, is it okay for them to start knocking holes in the drywall before the.

Will: [00:49:24] I don't, I don't, I honestly don't know

Brad: [00:49:25] Before they've got the deed in their hands?

Will: [00:49:27] I mean, do they exchange IP? I guess? I mean, probably not because it's all proprietary and secret.

Brad: [00:49:32] Yeah, you're probably right. You're probably right.

Will: [00:49:34] Um, I mean, I assume they're talking. I know in the media space, cause I've worked with people who went through a giant merger recently, everybody was talking to everybody, but nothing was happening, but everybody knew what was coming.

So I'd say that's probably what happened. I don't, I don't know. I'm curious to hear what people think. Like

Brad: [00:49:53] do you ever

Will: [00:49:54] If you're at Nvidia, send us an email or Signal me.

Brad: [00:49:57] Do you ever have this tendency when you see a deal of this magnitude? I mean, like you said, this is basically one of the biggest deals that's ever been made. Do you ever have a tendency to look at that and try to ascribe some kind of, like, organization and method to the whole thing?

Like, oh, there must be a plan fully written down, and they know exactly what they're doing, and they're going to execute it. But then what you just described as the actual reality of it is just a bunch of human beings flailing around, trying to figure it out for a period of months to years.

Right,

Will: [00:50:25] You don't spend $40 billion if you don't have a plan for what you want to do with the, with the thing you're buying,

Brad: [00:50:31] right, right. But, but it's not all set in stone.

Will: [00:50:33] Yeah. I mean, I think everything's fluid, and, like, there are some bits of the ARM business that are getting spun off that Nvidia didn't want, stuff like that.

Brad: [00:50:41] I mean, there's much more of a human, in-the-moment kind of element to the decision making, right, than you might think?

okay. Yes. 

Will: [00:50:51] Um, I was a little... $40 billion is a staggering amount of money, but given the scale of the ARM business, I was a little bit surprised that it was only $40 billion. Like, ARM sold a few years ago for 20 or 30 billion, whatever it was, and I was kind of surprised it was only... Like, I assume that the licensing fees aren't onerous per chip.

Brad: [00:51:20] Okay. Well that is a good segue into the next topic then. Cause I was wondering about that exact thing.

Will: [00:51:26] Yeah, I mean, I'm curious. These are very not-public numbers. It's funny, it's even hard. I was trying to figure out how much of the global microprocessor business, like, across all architectures and all markets, ARM represents. Cause it has to be, uh, a significant portion.

Um, but, like, that stuff is all buried in really expensive analyst reports about microprocessors. You know, like, if I wanted to pay five or $6,000, I could have gotten a report that would let me know exactly how much microprocessor business ARM gets and what their licensing model looks like, but I didn't spend that five grand, Brad.

I'm sorry

Brad: [00:52:08] Sounds like we need a $5,000 tier on the Patreon next month to get the analyst report in our hands.

Will: [00:52:13] You give us the 5,000 bucks, we'll buy one analyst report a month

Brad: [00:52:16] all right. That's all

Will: [00:52:17] and we're putting it right back into the pod.

Brad: [00:52:20] you could, you could just send us the report if you have it. I won't tell anybody.

Will: [00:52:23] Techpod@content.town.

Brad: [00:52:26] Um, so yeah, I guess that brings us to kind of the last big topic here, which you have written in the notes as: are there alternatives to ARM for phones?

Will: [00:52:35] Not really right now, but maybe soon.

Brad: [00:52:37] yeah, it seems like it's getting there.

Will: [00:52:38] I mean, Intel has been trying to push x86 parts for phones for a really long time and nobody's biting.

Brad: [00:52:44] Good luck with that, as we said.

Will: [00:52:47] But I think, uh, MIPS and RISC-V have both, like...

Brad: [00:52:52] risk five.

Will: [00:52:55] It's a risk five.

Brad: [00:52:56] It's RISC. It's the fifth, it's the fifth. Yeah, I agree that sounds better, but it is the fifth generation of RISC.

Will: [00:53:02] If it's five, why is the hyphen in there? If it's five...

Brad: [00:53:05] according to them. It's right there on the Wikipedia page,

Will: [00:53:07] I hate this.

Brad: [00:53:08] don't know what to tell you.

Will: [00:53:09] I regret... Look, I was pro-RISC. So RISC-V is, um,

Brad: [00:53:13] "RISC-V, pronounced risk five," is literally how the Wikipedia page starts. It also came out of Berkeley, actually. So I looked into it while we were talking about it. We were both right earlier that the concept of RISC started at Berkeley, but then MIPS was the company that spun off of that research and was the first commercial entity to offer a RISC...

Uh,

Will: [00:53:34] Like, it seems like there was parallel development happening in other places at the same time. Cause, like, DEC had a MIPS chip, or RISC chip, and all that, um,

Brad: [00:53:42] It's like how calculus was created by more than one person.

Will: [00:53:44] Leibniz and Newton at the same time.

Brad: [00:53:46] Newton. Yes. Um, but, uh,

Will: [00:53:49] Newton got the win though cause he spoke English. It turns out. Um, but he, he had a better publicist, I think

Brad: [00:53:55] Yeah, that whole apple thing is very easy to remember.

Will: [00:53:57] Yeah. Also apocryphal

Brad: [00:53:59] Yes, I'm sure, I'm sure. But, you know, a little bit of public mythmaking goes a long way in getting people to remember you.

It turns out. Um, so yeah, which one do we want to talk

Will: [00:54:09] so,

Brad: [00:54:09] kind of the same

Will: [00:54:10] RISC-V. Well, RISC-V is an open, open source...

Brad: [00:54:14] It's literally an open source instruction set, the way that open source software is...

Will: [00:54:19] Yeah.

Brad: [00:54:20] free, freely available.

Will: [00:54:21] You can take that instruction set and add, like, a few billion dollars and hundreds of people, maybe thousands of people, and you can have a CPU in a few years.

Brad: [00:54:31] Yeah, I guess that's the big difference here is that like, um, a single programmer can take an open source repository and do something cool with it. And the barrier is very low.

Will: [00:54:40] Depends on the repository, but yeah

Brad: [00:54:41] Well, you know what I mean, though? Like, a single person can develop software pretty effectively, depending on what they're doing and how skilled they are, but to take an open instruction set and actually turn it into a tangible product takes, like, unthinkable amounts of money and skill to design the chip. Getting chips made is incredibly expensive and difficult, you know? It's not like some little garage startup is going to take this and change the world, right?
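To make the "open instruction set" point concrete (this is an added sketch, not something from the show): the RV32I base formats are published openly, so a lone programmer can write tooling against them without any license. Here is a toy Python decoder for a single instruction, addi; the word 0x00510093 is the standard encoding of addi x1, x2, 5.

import struct  # not strictly needed here, but handy if you read words from a binary

def decode_itype(word: int) -> str:
    """Decode one 32-bit RV32I I-type instruction word (addi only)."""
    opcode = word & 0x7F          # bits [6:0]
    rd     = (word >> 7) & 0x1F   # bits [11:7], destination register
    funct3 = (word >> 12) & 0x7   # bits [14:12]
    rs1    = (word >> 15) & 0x1F  # bits [19:15], source register
    imm    = word >> 20           # bits [31:20], 12-bit immediate
    if imm & 0x800:               # sign-extend the immediate
        imm -= 0x1000
    if opcode == 0x13 and funct3 == 0:
        return f"addi x{rd}, x{rs1}, {imm}"
    return "unknown"

print(decode_itype(0x00510093))   # prints: addi x1, x2, 5

Writing software against the open spec is the easy half; turning that same spec into competitive silicon is the part that takes the billions of dollars and the thousands of people they are talking about.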

Will: [00:55:06] I mean, they might. That's not out of the question. Um, you know, that's a really good question: is this still a market that a garage startup, or even, like, a billion-dollar startup, can compete in?

Brad: [00:55:25] Maybe. The thing that, uh, stuck out to me in reading about RISC-V is actually something about ARM that had never occurred to me, which is that, with RISC-V being open, you can take the literal base instruction set and do whatever you want with it. Uh, I did not realize that an ARM license is not anywhere near that permissive.

You can essentially extend the ARM designs, but not that much in comparison. Does that make sense? Like, it sounds like you're largely taking the cores that ARM offers, with some customizations that you can specify, but you cannot just take the ARM instruction set, license it, and then make your own chip from the ground up with it.

Will: [00:56:03] you can do that. That's what Apple did specifically. So, but,

Brad: [00:56:07] Okay. Then this thing I was reading was wrong about that.

Will: [00:56:10] Well, what you're saying is you can't license the instructions for ARM and then, you know, build your own chip. But like a lot of businesses, if the price is right, you can do the thing you're looking for.

Brad: [00:56:27] Sure. But maybe that's, like, written into the licensing agreement and it costs more, or something like that, right? That's what I'm getting at: you're not just going to pay ARM a flat rate, take those instructions, and then go off to the races and do your own thing.

Will: [00:56:38] Yeah. If you license the Cortex-A9, you're licensing the chunks that make up the Cortex-A9, and then how you assemble them is up to you.

Brad: [00:56:45] That's what I'm getting at. Whereas the difference is, with something like RISC-V, or actually now MIPS, which we're getting to, you can take these literal instructions and, like, build your own CPU core from the ground up. That's my understanding.

Will: [00:56:59] And that is... so, I'm going to be interested to see if open source projects can handle this. My guess is that they probably can't. Like, it seems like too big of a thing for the typical chaos of a big open source project.

Brad: [00:57:16] I can see that.

Will: [00:57:17] Um,

Brad: [00:57:18] And so it should, should we

Will: [00:57:18] I also don't know

Brad: [00:57:20] Should we mention... you mentioned the MIPS stuff, before we get too much further?

Will: [00:57:22] Yeah, the MIPS stuff. The MIPS stuff is interesting.

So,

Brad: [00:57:26] They have open sourced their architecture, which, like you said, has been around since the eighties, right? Like, my router has a MIPS64 CPU, for example.

Will: [00:57:36] Yeah. Like a lot of networking hardware is powered by MIPS

Brad: [00:57:39] So it's really interesting, especially in the context of my router. The EdgeRouter 4 is, like, I SSH into it and it's just running Debian. But, you know, by nature it has to be a MIPS64-compiled version of Debian. And so, like, when I go on there and hit the apt repositories to look around for software to install on there, there have to be MIPS64-compiled versions of those binaries available to actually do anything with them. You know what I mean?
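As a concrete illustration of that point (an added sketch, not from the show; the exact architecture string on the EdgeRouter is an assumption): a Debian-based system reports which architecture its packages must be built for, which is why everything in its apt repos has to exist as a matching build.

import platform
import subprocess

# What the kernel says the machine is, e.g. 'mips64' on a router like this,
# 'x86_64' on a desktop PC.
print("kernel architecture:", platform.machine())

# What dpkg says its packages must target (dpkg --print-architecture is a
# standard dpkg option). Every .deb installed from apt has to be built for
# this name, e.g. 'mips64el' or similar, depending on the port.
arch = subprocess.run(
    ["dpkg", "--print-architecture"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("dpkg package architecture:", arch)

If a repository only carries amd64 builds of a package, apt on the router simply has nothing it can install, which is the constraint being described.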

Will: [00:58:09] It's, um, it's interesting. Like, if you look at ARM as embedded processors, which is what we would have called these, like, 10 years ago, um, MIPS is the original kind of embedded processor, right? But I think there was even a MIPS version of Windows NT up until 4.

Brad: [00:58:28] Sounds right. Cause I think they were in workstations back then. Right.

Will: [00:58:32] Yeah. They were early RISC chips and people were excited about them, so they ended up in weird places.

Brad: [00:58:38] So this article you linked to basically makes it sound like MIPS just lost dramatic amounts of market share to ARM over the years.

Will: [00:58:45] Well, so the challenge is that as ARM has developed more performance per watt for higher-end CPUs, for higher-end SoCs for phones, they still have, like, a really deep well of designs that go back, you know, and can scale down to less capable devices that don't need the same kind of hardware.

Um, but yeah, there are still 10 billion chips that use MIPS, I think, is what they advertise.

Brad: [00:59:09] That's, I mean, that's not exactly nothing.

Will: [00:59:12] Like, a small chip manufacturer is still selling billions and billions of chips.

Brad: [00:59:17] Okay. Okay. That's fair. But, but the point is they have now opened their architecture,

Will: [00:59:21] Yeah.

Brad: [00:59:22] Just like RISC-V, except they are a commercial company that has also made this open.

Will: [00:59:27] So, yeah. The takeaway on this, if the question we're answering is "are there alternatives to ARM for phones," is kind of: not right now, but maybe in the future.

Brad: [00:59:39] They're available. It's just that nobody has done the insane amount of legwork it would take to actually implement them in that role.

Will: [00:59:45] Yeah. And on that timescale, like, if we're looking at, you know, two to three years to close this deal, could Apple switch from ARM to MIPS, or ARM to RISC-V, whatever? Could they, in that time? Um, I wonder if Apple would have committed to full ARM after this deal closed.

I know that's obviously been in the works for 10 years on their side, but Apple and Nvidia have a kind of relationship that has not been great, and I'll be interested to see how they react to this.

Brad: [01:00:23] Yes. So when you mentioned that maybe the ARM licensing fees are not actually that high, that was the exact question I wanted to address with this: would Apple switch? Is it worth it? Like, you know, Apple being one of the most cash-rich companies in the world, right? And with the number of devices they sell, are the licensing fees on ARM just low enough that it's not worth the investment for them to switch to a license-free kind of...

Will: [01:00:47] So there's multiple components. One is that ARM has been optimized relentlessly for the last 13 years in a way that these other architectures have not. Like, they're not as fully understood. They don't know where the slow parts and the fast parts are, where the bottlenecks are. We don't have institutional knowledge in how to design really fast, capable MIPS processors or RISC-V processors for phones and tablets and set-top boxes.

Brad: [01:01:15] I'm just throwing this out there, but this would probably be at least, like, what, a decade-long undertaking to implement?

Will: [01:01:22] Yeah. My guess is that Apple looked at the power-performance curves on ARM in, like, 2007, when they started selling iPhones, and they were like, hey, you know, if this continues at this rate, the Intel line and the ARM line are going to cross in about 2018, 2020. And at that point we could start not doing Intel chips anymore if we wanted.

And that's been the process, that's been in the works, this whole time. So yeah, you're looking at 10 years from the starting line to when you start seeing viable competitors shipping. And like everything else, you'll start seeing, like, weird dev kits two or three years before the actual viable products get done.

Like, you know, if you think back to that first Intel Chromebook that was released, it was slow and useless, and then two years later they were pretty nice and worth having.
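A toy version of the curve-crossing math described a moment ago (all numbers here are made up for illustration; this is the shape of the argument, not Apple's actual data):

import math

def crossing_year(start_year, arm_perf, arm_growth, x86_perf, x86_growth):
    """Year at which two exponential performance curves intersect."""
    # Solve arm_perf * arm_growth**t == x86_perf * x86_growth**t for t.
    t = math.log(x86_perf / arm_perf) / math.log(arm_growth / x86_growth)
    return start_year + t

# Hypothetical 2007 numbers: ARM at 10 "performance units" growing ~41% a year
# (roughly doubling every two years), x86 at 100 units growing ~15% a year.
print(round(crossing_year(2007, 10, 1.41, 100, 1.15)))   # -> 2018

With those made-up growth rates the lines cross around 2018, which is the kind of back-of-the-envelope projection being described.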

Brad: [01:02:10] Yeah. But just from the little bit I've been reading about these open architectures, analysts in that space seem pretty bullish on their future, though.

Will: [01:02:19] I think anytime you get a monolithic company like this, people will be hedging. They'll be investing in these as a hedge against Nvidia, and against Nvidia behaving badly.

Brad: [01:02:34] We've got one gigantic company now, Will, controlling a frightening amount of this space. Uh, these are just alternatives that might become more relevant in the future.

Will: [01:02:45] So that's, I guess that's it.

Brad: [01:02:47] Yeah, it's a fascinating topic. Like, we're obviously neither hardware designers nor programmers.

It's like

Will: [01:02:52] Nor analysts.

Brad: [01:02:53] Heady, heady stuff here. And we're just kind of trying to make sense of it as best we can, but I think it's fascinating.

Will: [01:02:59] look, man, it's a confusing world and we just wanna, we just want to figure out what's going on out there.

Brad: [01:03:03] Yeah.

Will: [01:03:05] Um, we've reached the portion of the show where we thank our patrons, uh, the currently 1,090 people who like what we do enough to give us money, or want to get access to the Discord, or want to hear the patron-exclusive episodes.

Brad: [01:03:21] Yes, yes. Or just want us to continue to keep doing this.

Will: [01:03:25] Yeah. Uh, you can find out how to support us on the Patreon at patreon.com/techpod. Uh, next week is our email episode. Yeah.

Brad: [01:03:35] Okay. That's good. That

Will: [01:03:36] means of the month

Brad: [01:03:37] I've got one more episode's worth of buffer before I have to finish that book.

Will: [01:03:42] Okay. You got two weeks, maybe three, depending on whether we do. Yeah.

Brad: [01:03:46] It's good to know what kind of timeframe you're working with.

Will: [01:03:48] Look, just a Sunday afternoon, Brad. Um,

Brad: [01:03:51] So I need to find a sunbeam and kind of relax in it.

Will: [01:03:55] Uh, but yeah, so I'll open up a Hitchhiker's Guide to the Galaxy channel, uh, in the Discord this week, so if patrons want to discuss it, they will be able to. Um, we've had a lot of, as always, lovely conversations. I learn something new every time I dip into the Discord. Um, like, I learned about panorama stitching, uh, and how photography on Mars works the other day, which was amazing. Just talking about how the panoramas are stitched and how you do, like, self-inspections of the wheels on the rover and all that stuff. It was awesome. Super cool.

Brad: [01:04:33] We have someone there who works on it.

Will: [01:04:35] Yeah. Yeah. Um, so, uh, if you want to find out more about how to join the Patreon or back the show, it's two bucks to get access to the Discord.

And you can find out about that at patreon.com/techpod. Uh, as always, thank you to our executive-producer-level patrons, Andrew Cotton, David Allen, and Jacob Chapel,

Brad: [01:04:56] Yes. Thank you so much

Will: [01:04:57] uh, and everybody who supports the show.

Brad: [01:04:58] Yes. Thank you everyone.

Will: [01:04:59] Um, and if you can't support the show financially, tell your friends, we had so many good tweets last week.

I had to stop retweeting them all because I didn't want to be spammy. Um, but we appreciate you.

Brad: [01:05:11] A lot of friendly and flattering tweets out there. Maybe I should start using Twitter again. I don't know. I don't know.

Will: [01:05:17] let's not get crazy

Brad: [01:05:18] I don't know, man.

Will: [01:05:19] Uh, you know what? I have found the secret to no-bummer Twitter.

Brad: [01:05:25] Yeah.

Will: [01:05:26] Well, it's not no-bummer, but it's mostly-no-bummer Twitter.

Brad: [01:05:30] Speaking of hedging. 

Will: [01:05:31] Turn off retweets

Brad: [01:05:32] Okay.

Will: [01:05:32] And you do that by just going to twitter.com and, in the mute filters area, you do a keyword mute for "RT:", and then they go away forever.

Brad: [01:05:42] I was going to say, I was surprised that they built that functionality in, but it sounds like they didn't. But you figured it out.

Will: [01:05:47] I told a Twitter engineer, or an ex-Twitter engineer, that you could do this the other day, by accident, and I might've fucked us. I apologize if I did. Um, and then also, if you filter out "https://" you'll get rid of almost all the links. I think images still come through,

Brad: [01:06:07] it's nothing.

Will: [01:06:07] nobody promoting bullshit, which I realize is a little bit hypocritical on my part, but whatever,

Brad: [01:06:11] Don't say that.

Will: [01:06:13] Yeah, well, cause the thing is, people see something and they're like, oh man, this is important, but I don't want to talk about it, it's a huge bummer, so smash the retweet button, I've done my job. And then everybody's just retweeting shit that bums them out. So, uh, yeah. Anyway, Twitter is bad, uh, but the tech pod is good. So thank you all for supporting us. We appreciate it.

Brad: [01:06:34] we are a better Twitter.

Will: [01:06:35] Yes. Well, but the thing is, the Discord is the thing. Like, I get out of the Discord now what I used to get out of Twitter when Twitter was good, in like 2008, when it was just me goofing with my friends.

Brad: [01:06:47] And in 2008 I was getting out of Twitter what I previously got out of IRC. So, like, whatever, this is a whole episode topic, but I think the internet is undertaking a process of wrapping back around to smaller, more enclosed communities again, perhaps.

Will: [01:07:03] Hold on. What did you get out of it? Did we replace human contact with IRC in the nineties? Is that what happened?

Brad: [01:07:11] I mean, that's very presumptuous of you to think that I had human contact in the first place.

Will: [01:07:18] Oh, okay.

Brad: [01:07:19] Look, there were not a lot of people importing weird Japanese PlayStation games in my little Podunk rural town. Let's...

Will: [01:07:26] That's the reason you were the anime editor, I guess, for so long.

Brad: [01:07:29] Had to go far afield to find like minds, let's say.

Will: [01:07:34] Fair. Um, okay, well, we'll see you next week. If you have questions, send them in to techpod@content.town. We're doing questions next week, so please send us some good ones, and we will see you all next time. Bye, everybody.