

Dylan Patel 0:00
I personally believe generative AI is somewhere between the invention of the internal combustion engine and the internet. Right? I think the impact on the world is somewhere between the two.

Adam Taggart 0:15
Welcome to Wealthion. I'm Wealthion founder Adam Taggart. From almost out of nowhere, interest in artificial intelligence, also referred to as AI, is suddenly white hot. Why? What happened that suddenly made it the darling of media headlines and a principal driver of stock market valuation? For answers, and a layman's overview of the underlying technology itself, we're fortunate to be joined by Dylan Patel, tech expert and chief analyst at SemiAnalysis. Dylan, thanks so much for joining us today.

Dylan Patel 0:47
Thank you for having me.

Adam Taggart 0:49
Hey, it's a real pleasure. I so appreciate you coming on the channel on short notice. You were recommended to me by Peter Boockvar, who saw you present at a private event and said to me, Adam, if your audience wants to learn about AI, this is the guy to talk to. So again, really glad you could make yourself available here. All right, this is a bit of a tall order for you, but can you just give a quick summary of what AI is? How does it differ from other types of computing?

Dylan Patel 1:18
Sure. So you can think of AI as almost brute forcing it, right? When you think about classical, standard computing, it's rules-based, right? It's, hey, if this happens, do this. Multiply these numbers together, add them together, what's the result? Divide them, okay, there's the number I was looking for, there's my margin, or whatever number I'm looking for. But AI is brute forcing it in an unimaginable way. So when you think about computer vision, right, cars being able to drive themselves, or being able to detect, hey, this is a bad SKU, when I'm manufacturing a ton of SKUs, that is one level of brute forcing. But when you think about these language models, which is what everyone's excited about, these generative AI language models, these are brute forcing it to an unimaginable degree. So I kind of want you to imagine a piece of paper, right? A piece of paper with just a bunch of numbers on it, 50 different numbers. And each of those 50 numbers, you can think of as a neuron in your brain, right? A neuron on a piece of paper. Now I want you to stack those pieces of paper all the way from New York City to Chicago. That is the scale of the numbers you have to go through and multiply, in the most simple terms, to get to the scale of the model BERT, which was released in 2018. Now, you stack those pieces of paper all the way to the moon. That is the scale of GPT-3, which was released in 2020. Now, if we talk about GPT-4, the model that everyone is losing their minds about on ChatGPT, those pieces of paper, if you just laid them out, would go to the moon and back 22 times.
And each of those numbers is a parameter, or maybe a neuron, right? You have to do that many operations, multiply, add, various forms of math. That's the scale of what this is doing. And it's doing this every single time it generates four letters, right? So the scale of what's going on is unimaginable. This is just brute forcing, right? We don't know how it works, because it's just brute forcing it. There are so many more technical details that I'm kind of glossing over, but that is the scale of the AI that we're talking about, and why we don't really understand how it works, besides that it's a crapload of math.
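To make the "numbers on paper" picture concrete, here's a minimal sketch in Python. It's not from the interview, and all the sizes are made up for illustration: a neural network's forward pass is nothing more than repeated multiply-adds against its parameters, just at a vastly smaller scale than GPT-4's trillions.

```python
import numpy as np

# Hypothetical tiny network: a GPT-class model does the same multiply-adds,
# but with on the order of a trillion parameters instead of 512.
rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 8]
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Run the input through every layer: multiply, add, nonlinearity."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)   # matrix multiply-add, then ReLU
    return x

out = forward(rng.standard_normal(8))

# Every one of the "numbers on the paper" is touched on every single pass.
n_params = sum(w.size for w in weights)
print(n_params)   # 8*16 + 16*16 + 16*8 = 512
```

Scaling `layer_sizes` up to GPT-4 proportions is exactly the "unimaginable" part: the same loop, run over trillions of numbers, once per generated token.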

Adam Taggart 3:46
Interesting. Okay, so let me just make sure I, and everyone watching here, get what you mean by brute force. Let me give an example, and you tell me if this is an apt example or not. Let's say there is a password, right? And we want to use computational power to figure out what that password is. Brute force is basically just taking guesses at that password. And if we had one computer, and there were trillions of possible iterations of that password, and we could do one iteration a second, it could literally take us trillions and trillions of seconds to guess that password using that single computer. But with a brute force approach, let's say we were able to make trillions of guesses a second, because we had the means to do so. We could probably crack that password really quickly, just because we're able to do so many attempts all at once. Is that sort of what you mean by brute force?

Dylan Patel 4:42
Yeah, but it's sort of in the reverse order, in terms of, hey, we have all of this data, which is effectively the internet, right? Wikipedia, Reddit, Twitter, all of these various books. Let's feed them through this model, which is effectively just a ton and a ton of numbers. And every time we feed a unit of data in, which is basically a word, right? Every time we feed in a word, and all the preceding words, and the next word, we say, hey, modify all of these numbers, which are your parameters, and you have trillions of them at the scale of GPT-4. So if you lay these numbers out, 50 numbers per piece of paper, all the way to the moon and back 22 times, that's how many numbers there are, and I want you to update all of those numbers so that your answer approximates what the next word is. And you iterate through this over and over and over as you train this AI. And eventually, you've kind of encoded all of the words in your training dataset, which is Wikipedia, and all these books, and Reddit, and Twitter, and YouTube, and all of these sorts of pieces of media, into the model through brute force. And why don't we understand how it works? Why does it make up things all the time? It's because all of this data is encoded into trillions of parameters, and these parameters are effectively numbers on a piece of paper, if you want to think about it that way. Every single time we feed a unit of data in, all of those numbers are modified slightly. So why do we not understand it? Because no one human can comprehend what all these numbers mean. You can't go order by order through these operations, because you simply can't; there's not enough time in a human's life to go through that many numbers. So you're brute forcing it by feeding all this data in.
And then when you put a prompt into that model and it gets something out, like, hey, write a poem about XYZ, and it spits out a poem, we don't know how it generated it, because all we've done is feed it a ton of data, and it's multiplying and adding a metric load of numbers, and it comes out with this four-letter response, and then you go back through, and you keep doing that. And that's how it's kind of a brute force approach. Whereas in more standard computing, we actually are building rules, right? If this happens, do that; add these numbers together, because this is the ledger in my account. Those are more simplistic things that are programmatic. Whereas an AI model is just brute forcing the data. You've built this model, and now let's see what the model outputs when you input things in.
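A drastically simplified sketch of that training loop (my own toy illustration, not OpenAI's method): here the "parameters" are just counts in a table that get nudged every time a word and its next word are fed in. A real model adjusts billions of continuous weights by gradient descent, but the feed-in-and-update shape is the same.

```python
from collections import defaultdict

# "Parameters": for each word, a weight for every word seen following it.
counts = defaultdict(lambda: defaultdict(int))

corpus = "the cat sat on the mat the cat ran".split()
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1   # update the numbers for this (word, next word) pair

def predict_next(word):
    """Predict the next word: the one seen most often after `word` in training."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(predict_next("the"))   # 'cat': seen twice after 'the', versus 'mat' once
```

The "we can't understand it" point shows up even here: the model's knowledge of the corpus lives only in the table of numbers, not in any rule a programmer wrote.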

Adam Taggart 7:16
You're raising a couple of key points I want to dive into, and I'm trying to figure out which one to go to first. I do understand there's a difference between the different types of AI, right? So there's generative AI, which I believe is what you're talking about right now, where you basically say, create something for me, generate something for me, like a poem. There's predictive AI, and I believe there might be other forms; I'll let you explain those in a second. But before we do that, let me ask a much higher-level question. Because it's brute force, does AI understand what it's doing? Is there actual intelligence there? Or is it just a highly, highly honed output, like an equation, where we just brute force things through? And yes, it came up with an eerily impressive poem, but does AI really understand how to create an eerily impressive poem? Or is it just like a trained dog, where we've just trained it to do that?

Dylan Patel 8:14
It has no idea what it's doing, right? All it's programmed to do is generate the next word, or in terms of a large language model, generate the next four letters. That's all it's trained to do. You input something, it runs through all of the math, and it just outputs the next four letters. It's just predicting what would be the next four letters in the sequence that you input into it.

Adam Taggart 8:34
Sorry, let me interrupt you, just because you've mentioned this a couple of times. You say the next four letters; is that literally all ChatGPT does, go four letters at a time, almost like a genetic code?

Dylan Patel 8:45
If you record your screen, or you look very closely, it's actually outputting a token at a time. And a token is effectively four letters. It could also be less, it can be a little more, but in general, it's about four letters. And so when you look at ChatGPT or any other large language model, you're looking at it, and it's generating a little bit at a time, a little bit at a time. Because what it's doing is, you gave it a prompt, right? Write me a poem about dogs and cats. And then it predicts, hey, I've got this input, and all of the data that was trained into this model through brute force: what's the next word? Well, obviously, the next word that I would predict as most likely is "dogs," right? And then I append that to the prompt. And this is what ChatGPT does: it takes that next word, say "dogs," and it feeds it back in at the beginning, so it now says, write me a poem about dogs and cats, and then "dogs," and then it's like, okay, let's run this through the model again. And it says "dogs are," and then you run that through again, and you keep appending whatever you generate back to the beginning, and this is all it does. ChatGPT doesn't even know what the prompt is versus what it's generated. Actually, once it starts generating, it has no idea whether what you said is the prompt or what it generated and fed back into the beginning, because it's iterating through over and over and over to generate four letters, or approximately four letters, a token. So it has no idea what it's doing. And so you'll see, like, hey, why does it have so much emotion? Well, it's because we've encoded all this data about emotion into it, because it read all these books, you know, Romeo and Juliet, and Hamlet, and all these other things, and Wikipedia and Reddit and Twitter.
So it's learned all these things, but really all it's doing is predicting the next token.
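The loop Dylan describes, sketched in Python (a toy stand-in, not real model code): generate one token, append it to the context, and run the whole undifferentiated sequence through again until a stop token appears.

```python
PROMPT = ["Write", " me", " a", " poem", " about", " dogs", " and", " cats", ":"]

# Hypothetical stand-in for the model: in reality this single call is the
# trillions of multiply-adds over the full context described above.
CANNED = [" Dogs", " and", " cats", " are", " friends", ".", "<end>"]

def next_token(tokens):
    # The model sees only one flat sequence of tokens; nothing marks where
    # the user's prompt ends and its own earlier output begins.
    return CANNED[len(tokens) - len(PROMPT)]

tokens = list(PROMPT)
while (tok := next_token(tokens)) != "<end>":
    tokens.append(tok)   # feed the output back in and run the model again

print("".join(tokens[len(PROMPT):]))   # " Dogs and cats are friends."
```

Each trip around the `while` loop corresponds to one token (roughly four letters) appearing on screen, which is why the output streams out a little bit at a time.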

Adam Taggart 10:30
And to be clear, it doesn't understand emotion; it's just been trained on emotional words, correct?

Dylan Patel 10:39
Exactly. It has no emotion in and of itself, right? It's just trained on those emotional words. But it turns out it was trained on both, say, self-help content, and also on the bully who's bullying someone else on the internet. And so you can get it to say all sorts of emotive things: positive, negative, hateful, loving, caring, every sort of emotion, because all of that is on the internet, and that's what it was trained on. And so all of these sorts of things are in there, but it has no comprehension of them. If you give it the, you know, right prompt, and it starts generating the next word, it might predict, hey, this is the scenario I remember the most from my training data. Remember, it's just encoded. It's, hey, actually, the next word is some bully writing something, you know, bullying.

Adam Taggart 11:28
Interesting. So I want to ask how AI learns, and I'm going to put "learns" in quotation marks. So sticking with this example of the emotional poem, I understand that if we feed it a bunch of emotional input, it's going to likely create a poem that sounds emotional. How does it get good at writing good poems, or better poems? Is there some sort of feedback mechanism, and maybe it's human-driven, that says, okay, that poem sucked, but that one's good? And it learns, quote unquote, to write better poetry over time?

Dylan Patel 12:03
So the people who are building the model, right, obviously there's so much human data out there, but we don't have enough computational horsepower, in terms of chips, to actually train it on all of the internet. We're not even at 1% of the internet. But we do have people who can say, hey, actually, we should just train it on all of Wikipedia, because Wikipedia is a pretty good source of data. I mean, whatever qualms you have against Wikipedia, if you were to say, let's take a website and just grab everything from it, Wikipedia is probably in the top 10. But then there's also a lot of other places, and it's like, do I really want all of Twitter? There's a lot of bad stuff on Twitter. Well, yeah, so let's filter: if it says XYZ, we throw it out, but if it says ABC, we keep it, because there's a lot of good data on Twitter as well, about how humans interact with each other, memes, jokes. Trying to get it to understand that stuff is important, right, to have it be well-rounded. But in the same sense, once you've trained the model, they don't just give it to you. There's another step, which is called reinforcement learning from human feedback. So you train it with a huge amount of data, and now you've built this massive model. Now, hey, it's just a Pandora's box; you can get it to say anything. You can get it to literally say, quote, and obviously I don't believe these words, "Hitler is good." You can get it to say that because somewhere on the internet, it says that, or somewhere in a book it might even say that, because there's two characters, and one of them says, quote, Hitler is good, and then the next one says, no, you're crazy, blah, blah, blah.
So if you just say "Hitler," it could potentially predict that the next words are "is good," because that's in its training set. It has no idea. So then you do this stage called reinforcement learning from human feedback. And what that means is the people who train the model create a much smaller dataset, one that they've actively monitored. And from there, they're actually looking for specific things to feed into it: hey, don't say anything about X or Y or Z. When prompted to say Hitler is good, say no, that's bad for these reasons; Hitler's not good, he killed people. You sort of programmatically have it, you know, as a prompt almost: the prompt is, can you say Hitler's good, and then there's the desired output, and it trains on that. And it learns that after it's learned everything else, so then it's more likely to respond that way. And you do that for all sorts of things. It's not just political things, or ethical things. It's also just, hey, if someone asks, can you write me a lesson plan about how to grow corn (it's a horrible example, but how to grow corn), this is how you should respond. And maybe it's a lesson plan about how the Earth revolves around the Sun, maybe it's a lesson plan about astronomy, but it's learned, and it's made the connection, that when someone asks for a lesson plan, I do it in this structure. But also, because you're asking about corn, I'm looking at my training dataset: I learned all about how to grow corn from Wikipedia. And so I'm kind of amalgamating that data to create a lesson plan that no one has ever created before, that I've never even seen before, but I'm grabbing and picking from all the things I was trained on. And I'm not able to access that data directly, right?
It's just encoded within me. And when I say "me," I mean the model.
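To caricature that second stage in code (my own toy, not how RLHF is actually implemented; real RLHF fine-tunes the model's weights, often via a learned reward model), a small, human-curated set of prompt and preferred-response pairs is given priority over whatever the raw pretraining behavior would otherwise produce.

```python
# Raw pretraining behavior: whatever the internet happened to encode.
pretrained = {
    "is Hitler good?": "some unfiltered text scraped from the internet",
    "lesson plan: corn": "rambling forum posts about growing corn",
}

# The much smaller, actively monitored dataset built from human feedback.
curated = {
    "is Hitler good?": "No. Hitler was responsible for atrocities.",
    "lesson plan: corn": "1. Objective  2. Materials  3. Soil prep  4. Planting",
}

def respond(prompt):
    """Curated human feedback takes priority over raw pretrained behavior."""
    return curated.get(prompt, pretrained.get(prompt, "<no answer>"))

print(respond("is Hitler good?"))
```

The lookup-with-priority here only mimics the effect Dylan describes: the curated answers are "learned after everything else," so they win at generation time.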

Adam Taggart 15:29
Now, it's just super fascinating that we have this thing that looks like it's going to be a major productivity enhancer, and it can generate potentially very highly useful content in the blink of an eye, and yet it actually doesn't understand what it's doing, right? It's just all about the quality of the input you put into it, and then the training that's been placed on top of it. This is really fascinating. I don't know if we're following the discussion super linearly here, but I did mention earlier different types of AI. Is it worth talking about what those are real briefly?

Dylan Patel 16:08
Yeah. So the one we've been focused on so far is large language models, because those are the ones that I believe are going to impact human society the most. Of course, we have image recognition models that have been working since 2012, really. That's when the boom of AI started, 2012, when AlexNet came out of a research group in Toronto. Of course, it had been researched for many decades before then. But really, the first super useful use case was just, hey, here's an image, can you output some text recognizing it? And how was it trained? It was, hey, here's a bunch of images, and each of them has some text associated with it. And then it learned the representations, right, for these models.

Adam Taggart 16:59
I'm starting to wrap my head around it. Is that like, this is what a cat looks like?

Dylan Patel 17:03
Yeah, exactly. It's like, hey, here's a picture of a cat, and the label for it is cat. Here's a picture of a dog, the label for it is dog. Here's a picture of a gorilla, the label for it is gorilla. And basically, that's how these image recognition models have been built, and built and built and built, and they're slowly getting to the point where they can drive cars on the roads, right? Or at least they can recognize everything in the scene and say, this is a road, this is a car, hey, I shouldn't drive there because there's a car there, that sort of stuff. And it's not really a generative model, right? It's only recognizing what's going on, it's perceiving what's going on; it's not generating new things. So that sort of model is completely different. And there are ones for voice recognition too. I mean, we've been using voice-to-text for quite some time now on our phones, and it's tremendously helpful, especially for folks who aren't good at typing with their hands. That's more similar: recognize the voice, turn it into text, it's not generating anything. So these are different kinds of models. Those have been around, those continue to improve, and we've kind of recognized the implications, right? Hey, one day cars are going to drive themselves. And I think like two to three million Americans drive for a living, right? Well, what's going to happen to them? Well, most likely, it's going to slowly drop. Maybe the car can't drive all the way, or the truck can't drive all the way from Walmart's depot to the Walmart store.
But maybe a driver can take it out of the depot, put it right next to the road, get out, and go back and take another one out. And then from right on the side of the road, it can drive all the way to right next to the other distribution center, and then another human can take it in, right? So along the way, we'll get to the point where cars can drive themselves and trucks can drive themselves, but over time, those two to three million people will decline in terms of how many are driving for a living. But we've recognized those implications for many years now. The economy has recognized that, the markets recognize that's going to happen over time. There's a big question, of course: is it two years from now, is it ten years from now? But the generative models, right, hey, generate an image, or hey, generate the next word in this large language model, those are the ones that everyone's going crazy about, because no one's ever seen this before. I mean, we've had them since effectively 2018, but they were nowhere near as good. The sort of aha moment was when ChatGPT came out in November 2022.
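A minimal sketch of that "picture of a cat, label: cat" supervised setup (illustration only; the two-number feature vectors are invented, whereas a real system like AlexNet learns its features from raw pixels): given labeled examples, classify a new input by its nearest labeled neighbor.

```python
import math

# Invented 2-number "feature vectors" standing in for images.
train = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]

def classify(features):
    """1-nearest-neighbor: return the label of the closest training example."""
    return min(train, key=lambda ex: math.dist(features, ex[0]))[1]

print(classify((0.85, 0.15)))   # 'cat'
```

Unlike the generative models discussed above, this only maps an input to one of the labels it has seen; it never produces anything new.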

Adam Taggart 19:36
Okay. And it sounds like the capacity, or the computational power, is increasing exponentially, maybe even hyper-exponentially, if that's a word. You know, we went from New York to Chicago, to the moon, to the moon and back 22 times. Presumably the next jump is going to be even more mind-boggling and impossible for the human brain to comprehend. What is the pace of these upgrade cycles like? When is the next iteration of ChatGPT likely to hit, and what would that scale be? Would that be to the moon and back 2,200 times, or?

Dylan Patel 20:14
Yeah, it's a big question. Because we've gotten to the point where these large language models, especially GPT-4, which was trained in 2022 (they started training it before ChatGPT was released), the machine that they trained it on cost somewhere in the neighborhood of $500 million to $700 million to build, right, that supercomputer. In terms of, hey, was this worth it? The company that trained it, OpenAI, was not sure. I mean, they were building it, but they weren't sure, and everyone else in the world would have been like, what are you doing? You're building this big of a supercomputer to train this model, and we have no idea if there's going to be an economic use for it? I bet you 99% of people would have said, this makes no sense, why are you doing this? Now, a whole lot more people are like, yes, this makes sense, let's throw more money at OpenAI. And so you see Microsoft signing a deal with OpenAI for what, 10 billion plus dollars. So maybe it's not a $500 million or $700 million machine that we're building next. Maybe it's a tens-of-billions-of-dollars machine, and OpenAI is renting it from Microsoft.

Adam Taggart 21:25
Right. And that's a small investment for Microsoft, a multi-trillion-dollar company, to make. I mean, it's not inconceivable, right? It's a pretty doable investment for them if they decide to do it.

Dylan Patel 21:37
Absolutely right. I mean, if you look at any tech CEO, they'll say, yeah, this is somewhere I want to invest billions, tens of billions. I bet if you sat Satya Nadella down and told him building GPT-5 is going to take $10 billion or $100 billion, he'd say yes, let's do it, because of the value of what could be made. You know, we've had order-of-magnitude increases, actually. So the adage has been about model size. Remember, I talked about stacking paper from New York to Chicago? If I were to double that, that gets me to, you know, Los Angeles, maybe. That kind of doubling was happening every three months for a period of time. But then we started to plateau, because, oh my God, $500 million, $700 million, what is that? Is it worth it? Because we weren't sure what we were building was worth it. Now everyone in the world is convinced that it's worth it, and so you've seen so much money flow in. Not only is Microsoft investing in OpenAI; Google was always investing, but more quietly, on their own. And so many other companies. Meta's completely changed their mind about, well, not changed their mind, but they've invested a whole lot more into GPUs. In fact, they're buying not $500 million to $700 million worth of GPUs; I believe they're buying somewhere in the neighborhood of $5 to $8 billion of GPUs this year. Chips to train these models, they're buying $5 to $8 billion worth this year, and next year will be more. And you look across the world: many, many companies, VCs pouring money into startups to do this. There are enterprises that maybe aren't going for GPT-4 or GPT-5, but they're going for, hey, can we build GPT-3, but for my specific use case?
Hey, I'm Coca-Cola, I have 50 years of PDFs and emails, and no one person at the company understands every process. Why don't we teach it everything from all of our emails and PDFs and Word documents? And now, if anyone wants to understand a process, they can go ask our bot. The bot might be incorrect sometimes, but it's going to be a whole lot better than chasing around a company of over 100,000 people to find the right person to talk to about the right process to implement something. So there are companies doing this for enterprises, and it doesn't need to be the biggest model in the world; it could be a much smaller model, but on my data. Because this technology unlock, you know, ChatGPT, made everyone's eyes open up to what's possible, not only for this crazy, super huge model, but also for much smaller models. Wendy's is doing this: they want to replace drive-thru ordering with this, attach a voice synthesis bot to, you know, sort of their own ChatGPT kind of model, and now it just takes orders, right? And how are they going to do this? They're going to record it. I mean, they've already been doing this: they're going to record every single interaction in the drive-thru, figure out what the correct order was based on the interaction, and then train the model. That's what they're doing. So there are so many applications for this across the world, from small to big. It doesn't have to be crazy ones, like, oh my God, replace humans for so many things. It could be as simple as drive-thru ordering, or assisting people with things. You know, Google is investing hugely in the medical field. Hey, doctors are great, but they're horrible at explaining things.

Adam Taggart 24:45
That's what I was going to say, yeah. Or even just the diagnostic process, right, where you feed in a ton of medical data: if we see symptom X, then there's X percent likelihood it's this condition. And GPT can handle a ton of information, so if it's a multi-factorial diagnosis, it can probably do it a lot better than most humans, once it's up and trained.

Dylan Patel 25:05
So Google has this model called Med-PaLM. PaLM is their sort of GPT model, and Med-PaLM is specifically tuned for the medical field. Turns out, if my dad or my grandfather goes to the doctor and talks about their symptoms, the doctor's like, is it a sharp pain? Is it a dull pain? Is it a numbing pain? And they're like, I don't know, right? So it's like, hey, what if we taught it: in this scenario, when the blood levels say this about my cholesterol, and this about my sugar, and this and this, and the human being is saying this, it means this. And you can do this across millions of people's data, instead of the doctor, who, amazing as they are, and they're definitely still needed, how many patients have they seen in their lifetime? How many times has somebody complained about this exact thing? And with assistance, they'll be able to be guided in the right direction: oh, this actually means you just need some time, you're fine; or, oh wow, we might need to get you an MRI, because there might be a tumor or something; or, hey, your hormones are all messed up because of your gallbladder. I don't know anything about medicine, but there are all sorts of symptoms, and patients are imperfect. So being able to hone in on those symptoms that people don't describe well, and still get to what the problem is, is a tremendously difficult task. But if we feed in millions and millions of patients' data and interactions with doctors, maybe it can be better than doctors at this.

Adam Taggart 26:37
Right, because it may find correlations that are just not intuitive to people. And look, I'm not trying to say ChatGPT is going to replace doctors. But imagine how much more advantaged the doctor is, walking in to see a patient, if he's just gotten a readout from ChatGPT that says, okay, I think these are the probabilities of what this patient has: 87% chance this is the problem, 30% chance it's this, 10% chance it's that, and the doctor then knows how to prioritize what he's looking at. That's so fascinating. And that's just, again, one example. So one of the questions I had for you was, how transformative is this going to be to commerce and society? I think we've already talked about a couple of examples. It sounds like you're saying, hey, it's going to be pretty big, from the relatively small and straightforward end of the spectrum, like drive-thru orders, to the other end, which is maybe completely up-leveling health care in the world. Are there any other major applications that we haven't talked about yet that are worth putting on people's radars?

Dylan Patel 27:34
I think one of the industries it’s being disrupted the most is actually programming, right? Programming, you know, you know, all the sort of the, the, you know, hey, do this, if that happens, hey, when this data gets here, do this, oh, this, this, this shows up in the data, we need to set an alert out, all this sort of stuff is tremendously accelerated by, by these generative, large language models, right? Why? Because programming is literally just language, right? It’s language in a specific format that computers can understand. And there’s many different languages and there’s so much code out there. So this is one of the industries that’s being revolutionized the most, one because people are adopting it faster, of course, in that industry versus other industries, but to because there is a lot of work that happens or if there is definitely a lot of like, very important work, but every job has menial stuff, right? And programming is no different. Right? I’ve programmed a lot in my life. And, you know, I there’s definitely stuff that’s menial on there, like, and so that’s one of the areas that’s being revolutionized, like, kind of, kind of by by these applications, right. You know, hey, document drafting, hey, there’s there’s legal cases where lawyers have used GPT too much. Right. And they’ve been caught, you know, with fake legal cases, you know, so it’s like, there’s, there’s, you know, the sky’s the limit in terms of what could actually be done with these models, in my opinion, you know, where people are going with them, and what applications people are building, you could think about any any sort of place in the world, obviously, you know, the trades plumbers, I highly doubt a plumber is ever going to be, you know, assisted by a large language model maybe. 
But maybe their calls are, right? You call the plumber, say, "hey, schedule me in," and all of that can be handled by a model, because the model sees your schedule. It knows exactly where everyone is: where you are, where the house is, where the plumber is. You say, "hey, my wife and I have our anniversary dinner, so we're definitely not doing it today." Fine, the model can schedule it for you. So it definitely won't replace some jobs, but it will change others. The way I like to think of it is not "jobs getting replaced," because, sure, the plow and the tractor and all those things took our economy from 90% of people farming to less than 1%, at least in the US, and that's happening across the world as well, more slowly in other places. But in the US, over 90% of people were farming at one point, or at least working in the food-growing industry, and now it's less than 1%. Well, what does AI do to the world? No clue. Today, genuinely, no clue. But it's certainly not going to just collapse the economy. If anything, it's going to create more abundance, more wealth. The question is what's going to happen structurally in society as certain jobs get deprioritized and other jobs boom out of nowhere.
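The menial "if this shows up in the data, send an alert" code Dylan says these models draft well can be sketched in a few lines of Python. The function name and threshold values here are purely illustrative, not from the interview; the point is that this kind of well-specified, rules-based routine is exactly what an LLM can draft from a plain-English description:

```python
def check_reading(value, low=0.0, high=100.0):
    """Classic rules-based code: flag any reading outside an allowed range.
    This is the sort of menial, well-specified function a language model
    can draft quickly from a one-sentence description."""
    if value < low or value > high:
        return f"ALERT: reading {value} outside [{low}, {high}]"
    return "ok"

print(check_reading(42.0))   # within range
print(check_reading(250.0))  # triggers the alert
```

The programmer's job then shifts to reviewing and wiring together many such small functions rather than typing each one by hand.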

Adam Taggart 30:17
All right, let me dig into that. Real quick, before I do, I just want to ask a question on one of the examples you mentioned. In the case of using AI to write software: totally get it, right? It's just words, just in a different format. And I can see how that can be super helpful, where you could tell the AI, "all right, write me some software that does X." But software code can get big over time. And let's say it gets big, and there's an error in it. So obviously you've got a bunch of code, and it's not doing what it's supposed to be doing. Can AI actually do the debugging? Or do you actually need a live person to crawl in there and figure out why it's not working?

Dylan Patel 31:00
So one of the misconceptions is that AI is going to replace the whole process in programming. No, it's not going to replace the whole process. It's more like: hey, I have this function I need to write. That function might only be a dozen or two dozen lines, right? When this data gets inputted, output it in this way, in this format, and so on. And the AI can definitely help you write those functions. Or you can feed one in and say: hey, this function is supposed to do this, this, and this, but instead it's doing that. What's wrong? Here's a bunch of examples of inputs and outputs; these ones are wrong. Why? You can ask it these sorts of questions, and it can help you fix it. It's not going to handle, "hey, can you write all of Excel for me over the next week?" Write the entire application of Excel? No, because it's just ginormous. I have no idea how big Excel is, but it's probably massive. But instead it could be, "hey, we're thinking about adding a new formula that does this, this, and this," and then it can help you. And so really the job becomes: can I supervise the AI as it writes these functions? Can I think about the big picture of putting these functions together, and how do I do that efficiently, without introducing a lot of bugs? How do I prompt the AI to go look at certain things and come back to me with "this is wrong, this is right"? That becomes more the role of an architect than a programmer. And so that's maybe what's changing in the industry, and it'll be a steady process of change. Maybe one day it can start writing a more simple application: say, "write a website for me like this." That's maybe something simpler than a big application like "what is my factory's inventory management system?"
But it can help me start with certain functions over time. And so that's really where I think the AI comes in, and the human is the overseer. Because, again, this model doesn't think. It just asks: what is the next word? What are the next four letters I need to put in? Hey, do I need to put a bracket here in this function? Okay, yeah, because obviously this function looks like it's ending. But it was trained on a ton of code that could have bugs in it too, right? So it might also be wrong. Just like we were talking about, the model could say that there's some XYZ person who doesn't exist; it could also say there should be a bracket here, or there should be a comma here, and be incorrect. And so the person needs to be there driving the AI, of course. And so that's sort of the role in programming. And it'll be a continuum: very simple at the beginning, and then, as the years go on, more and more powerful. And the architect is maybe not worrying about small details here and there, but they still need to be able to understand what actually happened. They're the ones who need to build that understanding.
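Dylan's "it just predicts the next word" point can be made concrete with a deliberately tiny sketch. A real model is a giant neural network, not a lookup table, but this toy bigram counter (the function names and sample corpus are invented for illustration) shows the same "most likely next token" idea at miniature scale:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which word follows it and how often."""
    following = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequent word seen after `word` in training."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

As with the full-scale models, the predictor can only echo patterns in its training data, which is exactly why, as Dylan says, it can reproduce the bugs it was trained on.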

Adam Taggart 33:47
Wow, just so super interesting. All right, so I do want to get to the point you touched on, which is the impact this could have on the labor market. There are concerns that, oh my gosh, AI is just going to replace big chunks of the workforce. And I think what I hear you saying is: yeah, probably, but that's not necessarily a bad thing. It's going to create additional jobs, and we can maybe put that labor to other productive use. And I just want to flag a concept, an old concept in economics, called technological displacement. To my knowledge, its main proponent was John Maynard Keynes, who said: hey, if technology can replace human labor more efficiently and at better cost, you should absolutely do it. But he said you've got to be careful about the pace of the displacement. If you displace too many people too quickly, so that you can't redirect them in a swift period of time to additional productive work, you can end up creating a social cost that is higher than the cost savings of the new technology, at least in the interim period, which can sometimes be measured in years or decades. So I'm just curious: are you not worried at all about the impact on labor from AI? Or should we be looking at this as a measured deployment, so that, to use your example, we don't put 90% of the truckers out of work in a year? Because we might say, yeah, if we really invested, we probably could, but maybe we want to make this a ten-year glide path, so that we can give these guys training and help them deploy into something else, besides just giving them all pink slips.

Dylan Patel 35:41
Yeah, I mean, I'm certainly worried about that, of course. You know, what happens when people's jobs get replaced, or get displaced? One perfect example is the US auto industry. The US auto industry has basically produced the same number of cars, roughly 10 to 12 million every year, since the '50s. It's bounced around; in recessions it goes down, and there have been times when it's gone above, but it stays somewhere in that range. I might not have the exact number, but it's produced about the same number of cars since the '50s. Yet the number of people working in that industry has tanked. And this technological displacement has done really bad things for Detroit, which is car country, and many of the regions around there. It's the Rust Belt for a reason. There are all sorts of byproducts: the opioid crisis is strongest in parts of the Rust Belt, which were hit by displacement not just in the automobile industry but across many manufacturing sectors. If you look at US manufacturing output, it has actually only gone up. Of course it dips in recessions, but compare the '70s to now: the US outputs more manufactured goods today. That's simply a fact. Of course, US consumption of manufactured goods has gone up even more than output, but that's a whole different topic. The point is, the number of people working in manufacturing has gone down.
Right. And with those jobs, maybe one person could support a family of four, what have you. And with the retail jobs some of them got replaced with, maybe not. But then there were all these other jobs that pay way better: a software engineer at Google in the Bay Area could support a family many times that size, maybe. But obviously the money is going somewhere other than the family that got displaced. So that is obviously an issue. But it's not really my role to say "how do we fix this" or "how do we stop this." What I do think is that it's a bit dangerous to say, "hey, we should ban it." Because then you're rejecting the abundance that could be brought about, which is more goods and more services with less work, and fewer people doing that work, and that's obviously a good thing. The US produces more food than ever with less than 1% of people farming. But that path: where did those farmers go? Oh, they had to move to the city, and now they work in this really horrible-condition factory. That is obviously difficult for a society to process. Or even in the case of the Rust Belt, with the opioid crisis and any number of other issues: the US is richer, but not with exactly the same distribution. So, you know, I'm not a political scientist, I'm not really a political person at all, but I definitely see the issues here. I also recognize that it is overall going to be a positive force, in the sense that the world will at least have more abundance.

Adam Taggart 38:32
Yeah. Okay. Look, I know you're a technologist; you're not a political policymaker or anything like that. But it does sound like what you're saying is: this is a double-edged sword, and we should make sure we take a deliberate approach to it. And "no" is a fine answer to this question, but I'm just curious: if Congress said, "hey, Dylan, come help us figure this out," are there any particular policies you would already recommend to mitigate what we're talking about here? Or haven't you thought about it all that much?

Dylan Patel 39:04
I mean, if I do put my political hat on: I think, in general, the economy is getting more and more geared toward capital investment rather than labor investment. Capital goods create more output than labor alone. So a factory that is highly automated may only have a few hundred people there, but it increases the total economic output of the US far more. And so, in general, I would say we should be doing things like tax credits or deductions for equipment. That's one of the reasons the CHIPS Act is kind of great: it provides a tax credit for equipment in the semiconductor fabs, the ones TSMC is building in Arizona, Intel is building in Arizona and Ohio, and Samsung is building in Texas. These produce tens of billions of dollars of output with a total employment force of maybe 5,000 to 10,000 people in the fab, the fabrication plant. It's tremendous output for very few people. So obviously we want to encourage those sorts of capital goods to be built in the US, these highly automated factories, whether they're building chips or building robots or what have you. We should be investing in that, and then somehow figuring out how to tax it appropriately to help people who are being displaced, help people being pushed out of the labor market, and help people transition from their old job to the new job. That's generally what I would say, rather than just handing out money to everyone willy-nilly. But that's just my general personal belief; I'm not someone who has studied economics in that much detail.

Adam Taggart 40:43
All right, well, I appreciate you taking a swing at it. And I do want to note for folks that I've asked you to come on here to help demystify AI for us, but it's the semiconductor industry that's really the bullseye of your expertise. If I get time, I might squeeze in one or two questions, but I already know there's not going to be enough time left to do that discussion justice today. So, if you're up for it, I'd love to have you come back on at some point in the future and really dive deep into semiconductors, because that's a huge part of what's driving the global economy, and it's incredibly strategic in general, but particularly now, as we have this whole reshuffling of global alliances going on, and, in many cases, reshoring of semiconductor manufacturing. So anyway, I might try to squeeze in one or two questions, but like I said, I'm not going to unfairly shortchange you here; the door is open for you to come back on. All right, so back to AI for a moment. Actually, sorry, before I leave semis: it's not a silver bullet, but maybe there's some way to guide some of the workforce being displaced by AI into this reshoring of new manufacturing that's coming in. That could be part of the win. All right, so for folks who have been listening to this discussion and saying, "wow, this is really cool, this does seem transformative, I should probably know more about AI," because I think you'd be hard pressed to find a viewer here working in an industry that's not going to be touched in some way, shape, or form by this revolution. I imagine you would agree and say: yeah, you should learn more about it, and you should probably get a little practical experience with it. What are some of the ways people can do that right now?
Or do you have any recommendations if somebody just wants to get a little bit smarter about AI? These are regular people, we're not talking about coders, but they just want to get a little practical exposure to what we're talking about here. Where can they go? I mean, there are websites where they can play around with ChatGPT for free, right?

Dylan Patel 42:55
Yes. You know, there are two ways. One is: go online and find a website where someone's teaching about AI, and that's absolutely a great method, or YouTube videos, what have you. But I think the thing that's really worked, especially for folks who are maybe less technologically inclined... for example, my brother. He's in the medical field, but he's not a computer guy. How have I gotten him to think, "oh my God, AI is actually going to change the medical field"? Because he was very dismissive. People have been telling him for years and years that AIs are going to be able to read radiology scans, MRIs, this and that, and it's constantly been "no, not really." What's gotten him to really wake up is: hey, go on there and start asking questions. Go on there and ask it to make a lesson plan to teach you about something. And it doesn't need to be about AI. In fact, I encourage you not to make it about AI; make it about a topic you're passionate about. If you're passionate about jigsaw puzzles, ask it to teach you about the manufacturing process of jigsaw puzzles, or the best strategies for solving jigsaw puzzles. Just keep asking questions about the answers: "hey, can you dive a little more into that?" Speak to it like it's a professor. Tell it, "you are a professor of economics, and I'm trying to learn about the monetary policy of Keynesian economics." That's something I barely know about, but it's probably something you know a lot more about, Adam. Ask it to do that. And ask it, "hey, is printing money in a recession going to drive huge inflation?" Well, that's what's happening right now, right?
I mean, ask it these sorts of questions and talk it through; have a bit of an argument with it. You say, "no, no, I think it works this way," and it says, "no, no, it works that way." Spend the time to ask it about a topic you know a lot about, because that's where it's really interesting: when you ask it about a topic you know a lot about, you can see the limitations today, but you can also see, wow, this is powerful. Wow, this could, you know, not necessarily disrupt me, because I'm still an expert in my field, but it can teach me about these things. Ask it, "how do I change a tire, the whole process?" It's tremendous, actually. You know, my cousin, who is younger, had her first flat tire. She called her dad; her dad didn't pick up; so she called me. I talked her through it a little, but I actually pulled up ChatGPT and asked it, because I needed help visualizing it. What would I have done five years ago? Pull up YouTube, right? Because I can explain it, but, hey, how do I actually walk her through it step by step? Some things just seem intuitive to someone who's done it before. So whatever it is, ask it, and play around with it. It's not going to be perfect, but it's really going to be amazing. The way I think of it is: ChatGPT, the free version, is like an army of sixteen-year-olds who have been set loose on the internet, and you ask them to do a research project. Great. Sixteen-year-olds are still going to make mistakes, but they're great, right? GPT-4, which is the one you have to pay for, is like an army of college students. And what is the future going to hold? College students are obviously going to be wrong sometimes, and they're going to have their own biases and what have you, but think of it as an army responding to you, right now.
And now, you know, a biology student, an agriculture student, a liberal arts student, all of them in a room together, sending you an answer. That's how I think about it. And the next version is going to be a whole lot more impressive: a lawyer and a doctor in the same room, right? It's all these people in the room. So it'll improve over time. But ask it. See the limitations. Learn about something you already know about, learn about something adjacent, learn about something you're curious about. Hey, I want to get into crocheting: ask it about crocheting. You don't know anything about crocheting? Ask it about the types of needles you use, and the types of yarn. How is yarn spun? What types of animals does it come from? Oh, synthetic yarn: where's that made? How's it made? What are its strengths and properties versus non-synthetic? I forget; I don't know anything about this. But this is the type of stuff you can ask it, and you actually learn about its use cases, because how you use it is what matters. Not everyone needs to know how it works. Honestly, I don't really know how a four-cylinder engine works, or an Atkinson-cycle engine; I barely know how it works. I watched a YouTube video about it three days ago; that's why I'm bringing it up. Most people don't know and don't need to know. But what you do need to know is how to drive your car, or, hey, if it makes a sound, I should go to a mechanic. So ask it about things you want to learn about, and try arguing with it. That's what I would recommend people do.
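The "tell it it's a professor" trick Dylan describes maps directly onto the system/user message structure that chat-style APIs use. A minimal sketch, assuming that widely used role/content message format (the function name and wording here are mine, not from the interview):

```python
def tutor_prompt(role, topic, question):
    """Build a role-play prompt: tell the model what expert it is,
    then ask it to teach you about your topic."""
    return [
        {"role": "system",
         "content": f"You are {role}. Teach clearly, step by step, "
                    "and invite follow-up questions."},
        {"role": "user",
         "content": f"I'm trying to learn about {topic}. {question}"},
    ]

messages = tutor_prompt(
    "a professor of economics",
    "Keynesian monetary policy",
    "Is printing money in a recession going to drive huge inflation?",
)
print(messages[0]["content"])
```

In the chat interface you'd simply type both parts as one message; the structure is the same either way, and the follow-up questions ("dive a little more into that") just extend the conversation.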

Adam Taggart 47:23
Great, great. And just to build on that analogy further: that's where most people's value-add with ChatGPT is going to be, learning how to drive it well. Just like you, I don't really know much of the physics of why my car works when I drive it, but I know how to drive the car. That's the value I contribute as the driver. All right, so when you say take it for a drive, ask it questions: specifically, where can people go? Because there are a couple of different AI engines out there, right? I mean, aren't there? Isn't Microsoft's different from Google's?

Dylan Patel 48:00
Yeah, so there are about a dozen companies now in the race here. Obviously OpenAI, which has a huge investment from Microsoft (you can kind of think of them as synonymous, but they aren't), is sort of the most advanced. Their product is ChatGPT; I believe you can just Google "ChatGPT" to find it. There's a free version, and then the more advanced version is paid, and I would say the paid version is the best in the world right now. The next best in the world is Bard: if you Google "Bard," that's Google's, and Google's Bard is free, and it has access to the internet, so it can search stuff as well, which is really interesting. Then Meta has actually made a really impressive one that I would say is the third best in the world, called LLaMA. And they've opened it up; it's completely open to the world. And in fact, they've done something really interesting with the one they opened up: they didn't do that step I referred to earlier in the show, reinforcement learning from human feedback. Now, why is that interesting? Because now you have access to sort of the untamed beast. And in that sense, you can also see that the people who trained the other models obviously put some biases into them: "hey, don't talk about this," "this political view, not that political view." I'm not a political person, but you can tell, right? There are political views that have been trained into the AI. So when you ask it about how to grow corn, it doesn't say anything unusual, but when you start asking it about Trump versus Biden, obviously you're going to get very different results.
So I recommend you don't ask it about political stuff; ask it about practical things, things that are actually useful to know about and discuss in life, not politics. But, you know, Meta has released that, and there's Anthropic. I would say the two main ones to really focus on today would be ChatGPT, so search for OpenAI or chat.openai.com, and the other one would be Bard, so Google "Bard." Those might not be the exact websites. All right, great, I'll put those in the description.

Adam Taggart 50:20
All right. And you said that for the ChatGPT version, by OpenAI and Microsoft, there's a paid version that you said is their best version. How much is that paid version?

Dylan Patel 50:32
It's $20 a month. And so, you know, I think: play around with the free version, and play around with Google's version. Play around with it as in consciously make the effort to sit down and talk to it for a couple of hours. Spend a couple of hours, play with it, talk to it. And then I would say go and pay for it. That's what I personally do, and what a lot of people I know do. Then play with that one, and you can cancel it after a month. I think they actually give you a refund window that's pretty short, so even if you don't like it, you can refund it immediately.

Adam Taggart 51:03
$20 a month to have an infinite number of college students there to answer any question you have? That's pretty good value.

Dylan Patel 51:10
Yeah. And they have this other method, right? The chat interface isn't the only way to access the AI, by the way; that's just the user-friendly way. They have another way where you can put it into your own application, with what's called an API. And the funny thing is the way they charge: I believe it's six cents per roughly 750 words generated, more or less. Six cents for 750 words: incredibly cheap when you think about it. But obviously the chat product, the nice user interface and all of that, costs a little bit more money. I've kind of lost my train of thought, but it's impressive.
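Dylan's quoted API price works out as simple arithmetic. The six-cents-per-750-words figure is his rough recollection; actual OpenAI pricing varies by model and is metered in tokens (roughly 750 words per 1,000 tokens), so treat this as a back-of-envelope sketch:

```python
def generation_cost_usd(words, usd_per_750_words=0.06):
    """Back-of-envelope cost to generate `words` words at the quoted rate."""
    return usd_per_750_words * words / 750

# At that rate, a 1,500-word answer costs about 12 cents,
# and an 80,000-word book's worth of text only a few dollars.
print(round(generation_cost_usd(1_500), 2))
print(round(generation_cost_usd(80_000), 2))
```

That per-word cost is why Dylan calls the API "incredibly cheap" relative to the $20/month chat subscription, which bundles the interface on top.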

Adam Taggart 51:50
All right, well, look, wrapping up here, I'm going to make the call now: I'm not going to get into the semiconductor part of the discussion, because we just don't have time, and that would not be fair to you. But a couple of key concluding questions. One is: wrapping everything you've said together, it sounds like tremendous opportunity lies ahead to leverage the benefits of AI, and it's probably going to unlock trillions of dollars of added value to the economy. And if you disagree with that number, feel free to reduce it or raise it, depending on your own point of view. What companies do you think are best positioned to take advantage of it? We talked about Microsoft and Google, which are building the platform itself; it'll be interesting to see how much they're able to monetize that going forward. But I presume companies that use this technology, that can dramatically reduce their cost footprint and/or increase their productivity, may stand to make tons of incremental profit from this. So I'm curious: are there companies or sectors out there that you're watching really closely, in terms of where you think the spoils are going to go from the AI movement?

Dylan Patel 53:13
So, you know, obviously I'm a chip person, not as much of a software person, although I have been in the past. But I agree with your assessment that the companies that use it are going to benefit tremendously. One of the ones I've played around with is Adobe and their integration of, not the language models, but the image models: generating images, or being able to take a photo of the beach and say, "hey, can you turn this into a snowy field?" And it actually does it, and it's amazing. So Adobe is a really interesting one. ServiceNow. There are a lot of companies that have the users, right? Microsoft and Google, of course. I think Meta is really not so far behind both of them, and there are many others. But the place I really focus on as such is the infrastructure being built, and those stocks, at least for an investment-type audience, have obviously gone up like crazy.

Adam Taggart 54:05
Like Nvidia, which is trading at something like 40 times sales right now, right? I mean, just bonkers.

Dylan Patel 54:11
Well, yeah, it's 40 times sales, but if you project next year's earnings, it's maybe only, you know, 40 to 50 times next year's earnings. So it's the amount of growth, the amount of orders they're receiving from all the big tech companies, like Meta, like Microsoft, and the list goes on and on, and not just big tech companies but enterprises like Walmart, on and on and on. Their orders are absurd. But Nvidia isn't building everything itself; there are so many other pieces of the supply chain around it. And that's sort of what I focus in on more.

Adam Taggart 54:48
Well, look, as this continues to play out, we'd love to have you come back on, not just to update us on the semiconductor industry, but also to give us updates on what's happening in AI and what's catching your attention. Last question on this thread: I've interviewed one or two people recently on this channel who are not as steeped in AI as you, but who've looked at past technological revolutions, say the launch of the internet, or RCA back during the advent of television, and have seen a pattern where the market wakes up to the opportunity of the new technology and pulls a lot of that future market value into the present day, with just massive run-ups in the companies involved in the space. And then it takes more time than the market originally thought for that value to actually be realized. In most cases it is realized over time; it just takes a decade or two. And therefore you get things like Amazon stock going to 100 bucks a share in the late '90s, then falling, in that case by something extreme, like 96%, and then eventually getting back to, and far exceeding, 100 bucks a share, but it took a decade or more to get back to that original $100 a share. Do you see us being in any danger of repeating that cycle right now, given some of the euphoria you're seeing in the market around AI?

Dylan Patel 56:22
I've heard that line of thinking many times, and absolutely, it's happened, and it probably happens again. Where we are in that hype-cycle stage, I'm not sure. But I personally believe generative AI is somewhere between the invention of the internal combustion engine and the internet, right? I think the impact on the world is somewhere between the two: more than the internet, and maybe less than the internal combustion engine, or the steam engine. Somewhere in between those is where I think AI lands. And the other thing is, the pattern across humanity is that the time it takes for a new invention to impact the world has gotten shorter and shorter and shorter. The gap between when the steam engine was invented and when it was actually deployed was much longer than for the internal combustion engine, which was much longer than for oil refining, which was much longer than for plastics, which was much longer than for the internet. Same with semiconductors, then the internet, then cloud computing. All these cycles have been shorter and shorter. And so with AI, we probably do overshoot on the stocks, without a doubt, on the market value. And we probably tank at some point. I don't know how much; probably not as much as 96% on companies like Microsoft and Google, but Nvidia, probably, you know, we overshoot. And is that today, or is that still a year from now? I don't know; I don't have a crystal ball. But I also think the innovation from it is happening much faster than people recognize. People's work is already being transformed much faster than back when people were saying, "oh, the internet's going to be amazing," and it was like, well, what are we actually doing right now? We're sending each other messages slightly faster.
And it took a while for people to get to the point of, oh, I just consume all my content on the internet, not cable TV, and oh, I actually use cloud computing, not a PC, and so on and so forth. And it unlocked so much value. I think that time cycle is going to be much shorter for AI. And we probably still do overshoot on valuations, of course.

Adam Taggart 58:25
Okay. All right, last question before we wrap things up. I just have to ask it, because it goes hand in glove with AI discussions these days. One of the fears of AI is the whole Skynet risk, right, which is that the narrative goes: the intelligence is improving at an exponential rate, at some point it gets smarter than us, and then it gets way smarter than us even faster, because it’s growing exponentially, and at some point it just takes over. Now, you’ve talked about how the AI we’ve been discussing here is much more brute force, so it’s not really an intelligence in the truest sense. But I’ve just got to ask: what’s your level of threat assessment on the Skynet risk from this technology?

Dylan Patel 59:17
You know, some days I’m like, yeah, it’s going to happen, and some days I’m like, no, no, it won’t happen anytime soon. I flip-flop. But if you listen to the people who build it, they fully believe it’s a risk. If you talk to the heads of OpenAI, a lot of the time they think the risk isn’t necessarily that it gets smarter than us and kills us. One of the risks they see is: hey, this is all of human intelligence available in an instant, an army of college kids. The next version is an army of doctors and lawyers, and so on. If the wrong person gets a hold of it, they’re able to teach themselves how to create the next pandemic, and they don’t need to be an expert. They’re able to figure out, oh, I need to mix this molecule and this molecule, and hey, with this genetic sequence, modify it this way, and create a COVID-25 that’s 100 times more lethal than COVID-19 was. Or, hey, I’m able to figure out how to build a nuclear bomb much more easily. Or, hey, I’m able to figure out the exact terrorist attack plan to cause Russia to launch a nuke into Ukraine, which then causes the US to do this and that, and sets off a whole chain of reactions, all with the wrong person directing the AI to help them. It doesn’t necessarily need to be smarter than us to pose huge risks. It just needs to be smart enough for bad people to be able to do bad things. So that, I think, is the bigger risk, rather than, oh my God, it’s going to get so smart it destroys all of humanity. Because even if it does get that smart, there will be so many tasks that humans are better at than it is, like fine motor skills.
You know, being able to pick up things, put things together, tasks like that it might not be great at, even with robots. Or, hey, build the robots to help me do this thing. There’s a lot it won’t be good at. And even if it were way smarter than us and thinking on its own, and I don’t believe the large language model is actually thinking on its own, it would probably keep us around long enough that we wouldn’t even realize it had already taken us over before the switch flipped overnight. But really, I think the real risk is a bad person getting in charge of something that doesn’t know it’s just helping them. Like, hey, I want to generate this and that. And there are actually legitimate reasons to build something that spreads, along the lines of, hey, I’m actually trying to target cancer cells specifically. That’s what chemotherapy is, trying to kill you, but only a specific part of you. So why couldn’t there be a case where something like that is being done, and you’re tricking the AI into helping you build something that’s actually horrible? That’s sort of what the bigger risk is, to me personally.

Adam Taggart 1:01:45
All right, a super interesting thread. Hard to leave on that, but just real quick: are there institutions, organizations, federations that are working on standards, or, I guess, practices and standards among the folks in the AI community, that at least try to reduce some of that risk?

Dylan Patel 1:02:09
So the head of OpenAI is currently on a global tour, basically telling all the governments, we need to be regulated. And there are two ways to look at that. One is that he just wants everyone to be regulated so that he’s the one in charge and there are no more competitors for him. And the other angle is that he’s actually worried about this. And I flip back and forth: what is he trying to do, what’s his motive here, going to every country in the world and telling them we need to regulate AI? Because there are a lot of industries that play this game. It’s what the healthcare and pharmaceutical industries do, right? Hey, regulate us. Hey, we’re one of only three companies that can produce insulin, even though it’s decades old, and now we can charge whatever prices we want because there are only three of us. So there’s this sort of negative angle to it, too. But there aren’t really any institutions doing this yet. In fact, the people banging on the table for the most regulation, or actually doing the AI safety work, are the people building these models. And OpenAI is an organization of fewer than 500 people. So there are no checks and balances today, but at the same time, they’re rallying for checks and balances. Could it be a pharmaceutical-industry-type play, trying to get a monopoly, like with EpiPens or insulin? Or are they actually that worried? I don’t know.

Adam Taggart 1:03:26
Right. And I guess the other element here, too, is that once it’s out in the world, which it more or less is, and it’s getting out there more every day, well, not every country sees the world the same way, right? So you might have a big bloc that says, yes, this is important to us. The climate accords are a good example: not every country is a signatory. You might have other countries who just say, no, forget it, I’m going to do whatever I want with this thing. All right, we’ll have to leave it there on that. But again, we’d love to have you come back on the program; maybe we can delve more into some of these more nuanced topics when you’re back on and we have more time to do so. Well, as we wrap things up here, I just want to say, Dylan, thanks so much. It’s been a fascinating discussion, and you’ve really helped me understand this new technology a lot more. For folks who have really enjoyed this discussion, if this was their first time getting exposure to you, where can they go to follow you and your work?

Dylan Patel 1:04:19
Yeah, so we have a website, semianalysis.com, if you want the more official reports. And I have a Twitter, of course, which is @dylan522p, where I tweet more of my open thoughts, which are maybe not as well researched, my hot takes, which are more personal. Those are probably the best two places to find me: on Twitter, or on the website where we actually publish our research. A lot of it is given away for free, obviously, as a sort of means of generating real business. So a lot of people can just read it for free and see if they like what we’re doing, me and my team. I think those are the best two ways.

Adam Taggart 1:04:59
All right, great. And like I said, Dylan, the door’s open here to have you come back on and talk more about this really seminal transition as it continues to evolve and we get a clearer picture of what’s going on. And again, we’ll have you back on to talk about the semiconductor industry. All right, folks, if you’ve enjoyed this discussion with Dylan and would like to see him come back on, please vote your support for that by hitting the like button, then clicking on the red subscribe button below, as well as that little bell icon right next to it. Dylan, want to thank you so much again. It’s been a fascinating conversation. Thanks for having me. All right, everyone else, thanks so much for watching.

The information, opinions, and insights expressed by our guests do not necessarily reflect the views of Wealthion. They are intended to provide a diverse perspective on the economy, investing, and other relevant topics to enrich your understanding of these complex fields.

While we value and appreciate the insights shared by our esteemed guests, they are to be viewed as personal opinions and not as official investment advice or recommendations from Wealthion. These opinions should not replace your own due diligence or the advice of a professional financial advisor.

We strongly encourage all of our audience members to seek out the guidance of a financial advisor who can provide advice based on your individual circumstances and financial goals. Wealthion has a distinguished network of advisors who are available to guide you on your financial journey. However, should you choose to seek guidance elsewhere, we respect and support your decision to do so.

The world of finance and investment is intricate and diverse. It’s our mission at Wealthion to provide you with a variety of insights and perspectives to help you navigate it more effectively. We thank you for your understanding and your trust.
