Arguing Agile
We're arguing about agile so that you don't have to!
We seek to better prepare you to deal with real-life challenges by presenting both sides of the real arguments you will encounter in your professional career.
On this podcast, working professionals explore topics and learnings from their experiences and share the stories of Agilists at all stages of their careers. We seek to do so while maintaining a position unbiased by any financial interest.
AA187 - The Future of AI, According to Big Tech
We don't always do reaction videos, but when we do, we do them right!
On this episode, Product Manager Brian Orlando and Enterprise Business Agility Coach Om Patel react to a candid interview with the former Google CEO as he shares provocative insights on the future of AI, Silicon Valley's approach to innovation, and potential disruption to big tech companies.
Join us as we analyze his (somewhat) controversial statements on remote work, startup strategies, and the geopolitics of AI development. Listen or watch to get an inside look at how tech leaders think and debate ethical implications.
• AI agents and large language models will dramatically boost productivity
• Silicon Valley's "move fast and break things" mentality applied to AI
• Geopolitical competition in AI between nations
• Potential for AI to disrupt incumbent tech giants
• Ethical concerns around AI development and deployment
#ArtificialIntelligence #TechInnovation #ProductManagement #AIEthics
= = = = = = = = = = = =
Watch it on YouTube
= = = = = = = = = = = =
Subscribe to our YouTube Channel:
https://www.youtube.com/channel/UC8XUSoJPxGPI8EtuUAHOb6g?sub_confirmation=1
Apple Podcasts:
https://podcasts.apple.com/us/podcast/agile-podcast/id1568557596
Spotify:
https://open.spotify.com/show/362QvYORmtZRKAeTAE57v3
Amazon Music:
https://music.amazon.com/podcasts/ee3506fc-38f2-46d1-a301-79681c55ed82/Agile-Podcast
= = = = = = = = = = = =
Toronto Is My Beat (Music Sample)
By Whitewolf (Source: https://ccmixter.org/files/whitewolf225/60181)
CC BY 4.0 DEED (https://creativecommons.org/licenses/by/4.0/deed.en)
Welcome to the Arguing Agile Podcast, where Enterprise Business Agility Coach Om Patel and Product Manager Brian Orlando argue about product management, leadership, and business agility, so you don't have to. I watched a video a while ago with Eric Schmidt, who was the CEO of Google back in the day, and then executive chairman of Alphabet, Google's holding company, for quite a while. I watched a video of him at Stanford University, and then the video got taken down, which only attracted my attention to it even more. And because of our working agreement, I kind of let it slide and meant to go back to it. But I think the video is important. I think people should watch it, and I would like to watch it with you and get your reaction to what he's saying. Absolutely, sounds like a plan. This came up from a previous podcast where I said that people should watch this video just because it's good to understand how these folks think. If you're just a developer or a scrum master on a team, and you're trying to do the best you can with your insurance app or your banking app or whatever, and you've got people yelling at you for features, you live in a world, right? These people live in a completely separate bubble universe. Yeah, it might as well be. And it's important for you if you want to move up or around, or if you're a product manager and you want to move to different verticals, different business segments, it's important to understand how these people think. Yeah, I agree. I think it's important to understand how they think if you don't want to move down, you know, because it's going to happen around you, right? So understand the world that is around you. Hopefully we can shed some light on that. Yeah. So, I watched this when it first came out, before it got pulled down, a long time ago, and I don't think Om has watched the entire video. I have not watched it. We're gonna watch the video in its entirety, and hopefully we won't get pulled down; this video has been pulled down several times since it went up. Right. I don't know if it was something he said in the video, or if it's just Stanford being overly cautious about their IP or whatever. But don't be surprised if this video gets pulled down. Where do you see AI going in the short term, which I think you defined as the next year or two? Things have changed so fast, I feel like every six months I need to sort of give a new speech on what's going to happen. Can anybody here, there's a bunch of computer science people in here, can anybody explain what a million token context window is for the rest of the class? Basically it allows you to prompt with, like, a million tokens, or a million words, or whatever. So you can ask a million word question. So the question they asked him was, where is AI going in the next year? Which is a real lowball question; this is the softball portion of this event. And he opened up by saying, does anybody know what a million token prompt window is? The student, or I don't know who, these are like CS majors or Econ majors or something like that, right? Yeah. But they're Stanford, so they're special. He said, well, it's basically like a million words. It's not really a million words, but sure, it's like a million words. So you could feed it an entire novel, probably several novels, in a million words. I mean, a million words is more than all of the podcasts we have ever done; you could feed it all of the podcasts and then ask it a question based off of that. Yeah, so it's big. It's big.
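To put a rough number on that "it's big" claim, here's a minimal sketch, assuming the common heuristic of about four characters per token for English prose; real tokenizers vary by model and language, so treat this as back-of-the-envelope math only:

```python
# Rough sketch of what a million-token context window means in practice.
# Assumes ~4 characters per token for English text (a common heuristic);
# a real tokenizer gives exact counts and varies by model.

CONTEXT_WINDOW_TOKENS = 1_000_000
CHARS_PER_TOKEN = 4  # rough average for English prose

def estimate_tokens(text: str) -> int:
    """Cheap token estimate; swap in a real tokenizer for exact numbers."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_window(documents: list[str]) -> bool:
    """Could we paste this whole corpus in as a single prompt?"""
    total = sum(estimate_tokens(doc) for doc in documents)
    return total <= CONTEXT_WINDOW_TOKENS

# A ~90,000-word novel is roughly 500,000 characters, i.e. ~125,000 tokens,
# so a million-token window holds roughly eight novels at once.
novel = "x" * 500_000
print(fits_in_window([novel] * 8))   # True
print(fits_in_window([novel] * 10))  # False
```

Under those assumptions, an entire back catalog of podcast transcripts fits comfortably in one prompt, which is the point being made above.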
So let's see where he's going to go with this. Can anybody here give a technical definition of an AI agent? Again, by your silence... So he hasn't answered the question. He's, well, he's setting up. Yeah, yeah. Again, this is Silicon Valley, like, sorry, I don't know what to say on the podcast. I was going to say, this is typical Silicon Valley activity here. This is why we should do a live stream. Ha ha ha. All of our live streams should be like reaction videos. Like, hey guys, on this date at this time, we're going to react to this video, join us if you want to, you know what I mean? That would be pretty sweet. I wonder if we can do that. And then, like, phone people in for segments, that would be cool. And then put people's little video windows up in the corner. An AI agent, the agent, is basically the ability for the AI/ML to prompt an outside tool to go pull some text in, scrape the web, do whatever. I do this sometimes with old Chat Gippity, where I'm like, hey, Chat Gippity, stop hallucinating. Oh, posting to LinkedIn. That was a great example. Great example. I asked old Chat Gippity; I tried it with Claude too, but Claude's like, I like bread. I was like, you're drunk, Claude. Get out of here. But I asked old Chat Gippity, like, hey, here's the function I'm using in Python, and I'm posting to LinkedIn. How can I add an image to what I'm posting on LinkedIn? And old Chat Gippity gave me a response, and I looked at the response, and because I can write Python code, my first inclination looking at the response was, are you having a medical issue, Chat Gippity? And every time I went back and said, no, this is wrong, this is not what I want you to do, every time I did that, it gave me a longer and longer function with more code. And I was like, stop, stop. What I need is a block like that. So I copied the URL of the LinkedIn API documentation and I pasted it in the chat window. I was like, go read this, get informed, and then give me a better answer, because your answer is garbage. So it absorbed the page; it gives you a little indicator, like, hey, now we're reading the webpage, and whatever. And then it reads the webpage, it processes it, and then it gives you a better response. And that's the definition of an agent. Yeah. That's an agent. Yeah. It just goes and does something for you. Yeah. And in the future there might be agents that can interact with files that are on your computer and do other things. Right now it's a little limited. But he's asking if there's an agent because he's trying to take people to the future. So an agent is something that does some kind of a task. Another definition would be that it's an LLM with state and memory. Okay. Can anybody, again, computer scientists, can any of you define text to action? He's going somewhere with this. I'm not even gonna bother. Taking text and turning it into an action. Right, right here. Go ahead. Yes. Instead of taking text and turning it into more text, taking text and having the AI trigger actions based on it. So another definition would be language to Python, a programming language I never wanted to see survive, and everything in AI is being done in Python. So I'm guessing he's a fan of Flash; I don't know what he's like. Yeah, I don't know where he's going with this one. He's in academia. He might like R better. Oh man, do we need to take the afternoon off to do a simple loop? Okay.
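As a concrete, heavily simplified sketch of the agent pattern just described (the model asks a harness to fetch a page, the harness feeds the page back, then the model answers), the loop could look something like this. Both `call_llm` and the `FETCH:` convention are hypothetical stand-ins for illustration, not any particular vendor's API:

```python
# Minimal sketch of the "agent" pattern: the model can ask an outside tool
# to fetch a web page (e.g. API documentation), and the fetched text is fed
# back into the conversation before it answers.
import urllib.request

def call_llm(messages: list[dict]) -> str:
    """Hypothetical placeholder for a real model call (any provider)."""
    raise NotImplementedError("wire this to your model provider")

def fetch_url(url: str) -> str:
    """The 'tool': pull down a page so the model can read it."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def agent(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        # Convention (ours, not a standard): the model requests a tool by
        # replying with a line like "FETCH: https://example.com/docs".
        if reply.startswith("FETCH: "):
            url = reply.removeprefix("FETCH: ").strip()
            page = fetch_url(url)
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": f"Page contents:\n{page}"})
        else:
            return reply  # no tool needed; this is the answer
    return "gave up after too many tool calls"
```

The "paste the LinkedIn docs URL and tell it to go read it" story above is exactly this loop done by hand; an agent just automates the fetch-and-feed-back step.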
So he has strong reservations about Python. Is he an original C guy? Is he assembler? I mean, I understand the hangup with Python: it has a lot of libraries, and a lot of those libraries are terrible. I get it. But it's such a robust ecosystem; ignore the garbage stuff, there's a lot of good stuff. I don't understand. Anyway, let's go. There's a new language called Mojo that has just come out, which looks like they finally have addressed AI programming. But we'll see if that actually survives over the dominance of Python. We'll see. One more technical question. Why is NVIDIA worth two trillion dollars and the other companies are struggling? Well, I got you on that one. Why is NVIDIA worth 2 trillion? Because all of the people on WallStreetBets. Exactly. It's an emotional thing. That's what it is. All the people on WallStreetBets. And also because insider trading is legal. That's why. Those are the right answers, actually. And the dominance of one company in the market. Right. And monopolistic tendencies. Exactly. I think it just boils down to: most of the code needs to run the CUDA optimizations that currently only NVIDIA GPUs support, so other companies can make whatever they want to, but unless they have the ten years of software there, you don't have the machine learning optimizations. I like to think of CUDA as the C programming language for GPUs. That's the way I like to think of it. It was founded in 2008. I always thought it was a terrible language, and yet it's become dominant. There's another insight: there's a set of open source libraries which are highly optimized to CUDA and not anything else, and everybody who builds all these stacks, right, this is completely missed in any of the discussions. It's technically called vLLM, and a whole bunch of libraries like that, highly optimized to CUDA, very hard to replicate that if you're a competitor. So, what does all this mean? In the next year, you're going to see very large context windows, agents, and text to action. When they are delivered at scale, it's going to have an impact on the world at a scale that no one understands yet. Much bigger than the horrific impact we've had from social media, right, in my view. So here's why. I was playing the audio, but not the video, for what he was saying. Let me stop right there. He just said that because the programming is optimized for CUDA, competitors like Intel can do things, but it will be more expensive for them, and also because the world in the future will be different than today. I honestly forgot what the original question was. I think that was the idea: where will we be in 12 to 18 months from now? We'll be in the next year of AI. I think he may be getting into it by saying the impact is on a scale that nobody understands. That was his latest thing, but he hasn't really elaborated his answer yet. In a context window, you can basically use that as short term memory. And I was shocked that context windows get this long. The technical reasons have to do with the fact that it's hard to serve, hard to calculate, and so forth. The interesting thing about short term memory is, when you ask it a question, read 20 books, you give it the text of the books as the query, and you say, tell me what they say, it forgets the middle, which is exactly how human brains work too. Right? That's where we are.
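For a concrete feel of the CUDA lock-in discussed above, here is a tiny snippet using standard PyTorch calls (nothing from the talk itself): mainstream ML code paths route work to NVIDIA GPUs through CUDA and merely fall back to CPU when it isn't there, which is the software moat being described:

```python
# Illustration of the CUDA dependence: PyTorch targets NVIDIA GPUs via CUDA
# kernels, and silently falls back to (much slower) CPU code without them.
import torch  # assumes PyTorch is installed

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)
y = x @ x  # this matmul hits highly tuned CUDA kernels if a GPU is present
print(f"ran a 1024x1024 matmul on: {device}")
```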
With respect to agents, there are people who are now building, essentially, LLM agents, and the way they do it is they read something, like chemistry, they discover the principles of chemistry, and then they test it, and then they add that back into their understanding. That's extremely powerful. And then the third thing, action. So, whoa, I'm not gonna let him get away with just moving on past, like, hey, we're gonna read chemistry, we're going to understand here's the different chemicals and the way they might interact with each other, and then we're going to test it to make sure it actually reacts that way. Except, again, my experience with coding with AI, which is in Python, by the way, the most terrible language ever... never mind. The LLM will just make up some stuff that it thinks is true, and then I'll run the script and find out that it's not true. And then I will tell the LLM, you're wrong, here's the URL to the documentation that tells you why you're wrong, and then it will say, oh yeah, I'm wrong. Why were you wrong in the first place? It only knows what it's been fed. So I think that's true in his example too, by the way. I know he's kind of glossing over it, but you don't just feed it all the text and it learns chemistry. It's going to make wrong decisions based on that. The glancing blow that we just observed is: now that the context windows are so big and it has access to so many resources, it can take your code, or your chemistry, or whatever, and then spin up an instance and actually test that it works. And then it's not going to give you back a hallucination; it's going to give you back a tested, working answer. That would have been pretty revolutionary as an answer. Like, hey, we can spin up whole AWS instances, spin up code, spin up virtual machines, try your code inside of the sandbox, and if it fails and gives you an exception or whatever, then I'm just going to regenerate your code without the exception. So basically it's generating and giving you bulletproof products when you ask it simple questions. It could do that. It could do that. And all the constructs he's described so far, like agents; maybe an LLM could spin up an SLM that goes out and does that and feeds back the learning. So it learns from itself, but it also learns from other models. The government is in the process of trying to ban TikTok. We'll see if that actually happens. If TikTok is banned, here's what I propose each and every one of you do. Say to your LLM the following: make me a copy of TikTok. Steal all the users. Steal all the music. Put my preferences in it. Produce this program in the next 30 seconds. Release it. And in one hour, if it's not viral, do something different along the same line. That's the command. Om, you're up. You're up first. Oh my Lord. So you understand what he's saying, though, right? Yeah, yeah. As far as the power of AI is concerned, if you think about it just from that theoretical perspective, you could do that. But is he really advocating that people steal commercial products? Let's do a little arguing agile on this one. This is a very Silicon Valley way of thinking. It's like, hey, if the government bans this thing and the main company can't compete in the space, a vacuum opens up in the space. This is the Google product philosophy for a lot of their stuff, which is: don't try to grab an audience out of thin air and invent something new for that audience's pain point.
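Circling back to the spin-up-a-sandbox-and-test idea from a moment ago, here's a minimal sketch of what that generate-then-actually-test loop could look like. `call_llm` is a hypothetical stand-in for a real model call, and a real sandbox would isolate the filesystem and network rather than just running a subprocess with a timeout:

```python
# Sketch of the generate-and-test loop: instead of handing back a possibly
# hallucinated script, the harness runs the generated code in a subprocess
# and re-prompts with the actual traceback until it executes cleanly.
import subprocess
import sys

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model call."""
    raise NotImplementedError("wire this to your model provider")

def generate_tested_code(task: str, max_attempts: int = 3) -> str:
    prompt = f"Write a Python script that does this: {task}"
    for _ in range(max_attempts):
        code = call_llm(prompt)
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=30,
        )
        if result.returncode == 0:
            return code  # it ran; hand back code that demonstrably executes
        # Feed the real error to the model instead of arguing with it.
        prompt = (
            f"This script failed:\n{code}\n"
            f"Error:\n{result.stderr}\nFix it and return only the script."
        )
    raise RuntimeError("no working script after several attempts")
```

That is the "tested, working answer" idea in miniature: the model's output only reaches you after it has survived an actual execution.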
Come up with something that another company is doing, maybe not very well, and redesign it, so we can lose if it doesn't work out, basically, and just steal stuff and try to corner the market. I mean, this is like the Amazon: we're just going to compete with your mom and pop business and do it better at scale, and then when we do it, we'll put you out of business. We're not reinventing any wheels here; we're doing the same thing you're doing, but better, with better talent, and at scale. It's just the way that these people think. It's a Silicon Valley mentality. It really is. I understand how powerful that is. If you can go from arbitrary language to arbitrary digital command, which is essentially what Python in this scenario is, imagine that each and every human on the planet has their own programmer that actually does what they want, as opposed to the programmers that work for me, who don't do what I ask, right? Oh, my programmers don't do what I want. Yeah, well, very funny. So do it yourself, man. Very funny. It's very funny. It did. Also, this is like every no-code solution I've ever heard of in the world, where it kind of does what you're asking for, but it kind of doesn't, and it's also unmaintainable, and it's unreadable code, so you end up wiping away large swaths of it. They've got deep pockets. Spoken like someone who's never developed a day in their life, right here. Yeah, and like you said earlier, they can afford to lose. Sure. Alright, well, let's keep going, because I'm aggravated. The programmers here know what I'm talking about. So imagine a non-arrogant programmer that actually does what you want, and you don't have to pay all that money to it. So you asked about what else is going to happen. Every six months I oscillate; it's an even-odd oscillation. So at the moment, the gap between the frontier models, in which there are now only three, I'll review who they are, and everybody else appears to me to be getting larger. Six months ago, I was... so he doesn't go into it, but I will: it's OpenAI, Anthropic with Claude, and Facebook. He's not diving into it here. Convinced that the gap was getting smaller, so I invested lots of money in the little companies. Now I'm not so sure. And I'm talking to the big companies, and the big companies are telling me that they need 10 billion, 20 billion, 50 billion, 100 billion. Sam Altman is a close friend. He believes it's going to take about 300 billion, maybe more. I pointed out to him that I'd done the calculation on the amount of energy required, and I then, in the spirit of full disclosure, went to the White House on Friday and told them that we need to become best friends with Canada, because Canada has really nice people, helped invent AI, and has lots of hydropower, because we as a country do not have enough power to do this. The alternative is to have the Arabs fund it. And I like the Arabs personally, I spent lots of time there, right? But they're not going to adhere to our national security rules. Whereas Canada and the U.S. are part of a triumvirate where we all agree on security. So for these 100 billion, 300 billion dollar data centers, electricity starts becoming the scarce resource. If you follow this line of reasoning, why did I discuss CUDA and NVIDIA? If 300 billion is all going to go to NVIDIA... Wow. So the limiting factor, really, is energy as far as he's concerned. Right, when we get to that point. But he's not
considering the fact that the efficiency at which LLMs work isn't going to stay the same, right? If you look back, I don't know how many years, the efficiency has been getting better and better and better. So maybe at some point, theoretically, if you do the extrapolation, you could say, yeah, we're going to need a gazillion megawatts of power, but I don't think that's reality. I think we need to just let things go the way they are and see, right? Competition is a great thing. The three that he didn't name, that's at least three; it's not one. Right. That's a good thing. I don't know; his claim here, and he might go into it later in this video, I can't remember, is that only the people with hundreds of billions of extra cash laying around can make this happen. Yeah, I'm not buying that. Yeah, I don't know. Maybe if you're listening to this and you're like, oh no, Brian, you need a constant influx of new data to train your LLMs, your new generation LLMs, and that's where the money comes in... you know, I kind of understand that. But that's not really the claim he's making. He's just saying we need more capital, and more capital is a better product. And I feel that is an argument that is as old as time. Yes, it is. If you just give me more money, I'll give you a better product. Exactly. I can build you a better pyramid. I think LLMs will be training themselves; they already are doing that, by the way. I want a bigger pyramid. What color would you like? Sorry, we only got brown. Brown and gold are the only colors we can produce. There you go. You were at Google for a long time, and they invented the transformer architecture. Yeah, it's all Peter's fault. Thanks to brilliant people over there like Peter and Jeff and everyone. But now it seems like they've lost the initiative to OpenAI, and in the last leaderboard I saw, Anthropic's Claude was at the top of the list. I asked Sundar this; he didn't really give me a very sharp answer. Maybe you have a sharper or more objective explanation for what's going on there. I'm no longer a Google employee. Yes. In the spirit of full disclosure. Google decided that work-life balance and going home early and working from home was more important than winning. Ha ha ha. Whoa! Oh, shots fired. Okay, yes! A volley fired across the bow. This is, by the way, I think this is part of what got the video pulled down. This statement got such a negative reaction, it got the video pulled down. It's such a minor quip of nonsense that he's throwing out here. I don't know. Working from home is the opposite of winning, apparently. Obviously. Come on, let's listen more. And the startups, the reason startups work is because the people work like hell. And I'm sorry to be so blunt, but the fact of the matter is, if you all leave the university and go found a company, you're not going to let people work from home and only come in one day a week if you want to compete against the other startups. Hang on one second. All right, I cleared him off the screen. Om, the floor is yours. He's gone. So what I'm hearing him say is, he doesn't trust people to work unless he can see them, unless they're in the office. Well, also, he was the CEO of Google from 2001 to 2011, for 10 years, and then executive chairman of Google and then Alphabet from 2011 to 2017, for six years. I think currently he's out of a job. So basically he's unemployed, telling us that he doesn't trust people working from home.
In the tenure that he had with these companies in these leadership roles, working from home really wasn't that prevalent, if at all. 2001 to 2011, I believe it; the tech tools for remote collaboration were garbage in that era. I completely believe it. The tools were garbage, okay? But now we have virtual whiteboards. We have tools where you can go to a room, see who is online and available to collaborate, and just pull them in. We have tools for this now. So this "we're bad at facilitation" thing, you're pointing one finger at the people that want to work from home and four fingers back at yourself. Why are we bad at facilitating? Why are we bad at value creation and understanding the concepts of what makes good products remotely, that somehow we need to be in person to do that? You don't need to be in person to do that. You just need better facilitation and better value creation as a company. It has nothing to do with in person or not. Facilitation. They may not have had it, because they didn't need it, until maybe COVID hit, perhaps. He's not acknowledging that this video is only, well, how old is this? Like August? It's less than a year old. Yeah. So it was post-COVID, and COVID changed a lot of things for us, right? Not for him. Also, he hasn't been working, so there is that. I mean, look, I'm at a loss for words when I see this. And we see this all the time now with companies insisting people go back to work, at least part time initially; they'll say part time, but of course they're going to say full time, and they're trying to justify their real estate leases. I get that. But at the end of the day, there's a fundamental lack of trust between leadership like that and the employees. Yeah. Right. So I agree with you that you're pointing one finger outwards and four inwards, because as a leader it's on you to create the conditions where people can excel. Yeah. What, like, why not buy some facilitation skills? Why can the people in your company not develop enough facilitation skills that the one day a week in the office is the day that everybody looks forward to? Right, right. Forget the fact that the same people that are taking advantage of a global, distributed workforce, we've got people in India, people in Costa Rica, people in Argentina and Brazil, people in Europe, are the same people turning around in this video and saying, oh, remote culture is killing innovation. That is such a blinkered point of view. I feel like they're just doing themselves a disservice. You're right. They reached out offshore because the rates were cheaper. So, just to be clear before we move on, just to get all my dump truck of salt backed up and dumped onto the table before we move on: so you're saying innovation only happens in the office, when I can hire people in a small geographic area within a 30 minute to an hour drive of my home office. That's where the best innovation happens. And you're telling me that with a straight face, CEO of a holding company. CEO of a holding company that says, because people in California have to live an hour, hour and a half away, we'll put them on buses and we'll put wifi on the buses, so they continue working on their trip in and out. Welcome to Arguing Agile, Brian; you seem somewhat unhinged on this. Yeah, let me be real clear about this. What you're saying is, I have to have people within a 45 minute commute of my home office to be creative and successful as a company.
That's what you're saying. Are you sure you want to gamble your chips on that? I'm not Eric Schmidt in the heyday of the internet, 2001 to 2011, the money-printing era of the internet. It's like the auto manufacturing of 1930 to 1965, the era of car manufacturing where we must have been doing something successful, because how are we selling a million Chevy Impalas a year when the cars off the lot would barely run? There's very few people listening to this podcast that will remember that era. I think they did this in a Mad Men episode, where they bought a new car off the lot and the car barely ran. Without competition, every single stone that we throw is a smashing success, because there's literally nobody else on the market, right? This entitled attitude here, in my professional work as a product manager, is the hardest thing to push against. As an agile coach in an organization, when you interface with people like project managers with 20 years of experience, and they're like, well, you know what, we just got to micromanage people even harder, they're just not working hard enough, Om, that's the problem. When you interface with these types of people, I would imagine that you know you're not going to change these people's opinions or mindset. This is the messy stuff that you just got to deal with as best as possible, try to insulate your team from, and then move on as fast as possible. Because this attitude, this entitled attitude, is not going to change. This is exactly what we call the fixed mindset, right? And I run across this every single day; I think pretty much everybody does if they're in agile coaching, or even scrum masters, right? So you roll up your sleeves and you say, okay, I'm not going to fight City Hall, but I'm going to make a difference over here in my little microcosm, right? And then I'll evangelize the success of this team, and more people will want to come on board. That's really how you do it, one step at a time. But yeah, this is very, very prevalent, unfortunately. Rough. This is rough. Welcome back, Eric Schmidt. You've got a terrible fixed mindset. Let's keep going. Google, Microsoft was like that, but now it seems to be... there's a long history in my industry, our industry, I guess, of companies winning in a genuinely creative way and really dominating a space, and not making the next transition. It's very well documented, and I think that the truth is, founders are special. The founders need to be in charge. The founders are difficult to work with; they push people hard. As much as we can dislike Elon's personal behavior, look at what he gets out of people. I had dinner with him, and he was flying that night at 10pm to have a meeting at midnight with x.ai. I was in Taiwan, different country, different culture, and there's TSMC, which I'm very impressed with, and they have a rule that the starting PhDs, the good physicists coming out, work in the factory on the basement floor. Now, can you imagine getting American physicists to do that? The PhDs? Highly unlikely. First of all, yes, I can. I can imagine people, regardless of their educational background, working on the factory floor. Although, we haven't admired his absolutely fantastic Cosby sweater until this point; I just want to point that out. Cosby or Mr. Rogers. Hey, I think that's clearly a Cosby sweater. I don't think that's right. I think it is. Different work ethic.
And the problem here, the reason I'm being so harsh about work, is that these are systems which have network effects, so time matters a lot. And in most businesses, time doesn't matter that much, right? You have lots of time. Coke and Pepsi will still be around, and the fight between Coke and Pepsi will continue to go on. And it's all mega businesses, by the way, large businesses he's talking about. When I dealt with telcos, the typical telco deal would take 18 months to sign. There's no reason to take 18 months to do anything. Get it done. We're in a period of maximum growth, maximum gain. That's spoken like someone who's never worked at a telco, right there. Like, oh, 18 months, there's no reason for that, just get it done, is something that I don't think has ever been spoken at a telco, ever. But it's not even the telcos that are the limiting factor here, right? You're talking about legislation, you're talking about regulations. It takes time. I haven't worked at a telco, but I have worked in electric and natural gas delivery, which I feel is probably very similar to working at a telco, and things move in those industries at a snail's pace compared to a typical tech company. Right. Okay. I can't really come down on him for that; this is the way he thinks, and he's owning up to it. Oh, and also, it takes crazy ideas. Like when Microsoft did the OpenAI deal, I thought that was the stupidest idea I'd ever heard: outsourcing essentially your AI leadership to OpenAI and Sam and his team. Me too. I mean, that's insane. Nobody would do that at Microsoft or anywhere else. And yet today, they're on their way to being the most valuable company. They're certainly head to head with Apple. Apple does not have a good AI solution, and it looks like they made it work. I mean, in hindsight, look, everybody thought that, but that's what big companies do. Big companies don't innovate; they buy a solution, exactly. And in this case they partnered with a company. I don't understand why he just said that, to be completely honest. I guess he's probably saying it for the value; Microsoft stock has gone up crazy in the last two years. But for the value of Microsoft, you would think they would have homegrown their own solution, or wholesale bought one of these AI companies and merged it in. Maybe that's where he's coming from. He's like, I can't believe they would let somebody else develop this stuff and not just buy one. Because he hasn't been a shot caller with the ability to throw billions and billions, potentially trillions in this case, around since 2017. So maybe that's part of what he's commenting on, you know? Yeah, and I could make the same argument for IBM. They're behind on the AI front, right? Sure, but at the point where you're IBM, how many different things is IBM betting on being successful? You could bankrupt the whole company being like, we're going to buy all these different bets: we think AI is going to be the next thing, we think switching is going to be the next thing, we think 5G technology is going to be the next thing. They could spend their whole net worth ten times over. Buy or build something, right? I mean, somebody like IBM could have built their own AI solution, right?
If it were me, the product manager in this equation, I would say: what would be the cost of us building our own thing from scratch, versus just partnering 50/50 with one of these companies? I've done that at other companies that I've been with before, and a 50/50 split of revenue is like a no-brainer. It comes out way cheaper every single time. The actual corporate partnering with other companies is such a high-level decision in normal companies that it's just off the table for people like me. As a typical product manager on one product line, I couldn't make a deal to just say, you know what, we're not going to do anything AI related, we're going to farm all that off to OpenAI. We're not invited to that table, right? Yeah. In terms of national security or geopolitical interest, how do you think AI is going to play a role in the competition with China? So I was the chairman of an AI commission that sort of looked at this very carefully. You can read it; it's about 752 pages. I'll just summarize it by saying: we're ahead, we need to stay ahead, and we need lots of money to do so. Our customers were the Senate and the House, and out of that came the CHIPS Act and a lot of other stuff like that. If you assume the frontier models drive forward, and a few of the open source models, it's likely that a very small number of companies can play this game. Countries with a lot of money and a lot of talent, strong educational systems, and a willingness to win. The U.S. is one of them. China's another one. But certainly, in your lifetimes, the battle between the U.S. and China for knowledge supremacy is going to be the big fight. So the U.S. government banned essentially the NVIDIA chip exports into China, although they weren't allowed to say that was what they were doing, but they actually did that. They have about a 10 year chip advantage. We have a roughly 10 year chip advantage in terms of sub-DUV chips. 10 years, that long? Roughly 10 years. Wow. So an example would be, today we're a couple of years ahead of China. My guess is we'll get a few more years ahead of China, and the Chinese are hopping mad about this, like hugely upset about it. So that's a big deal. That was a decision made by the Trump administration and approved by the Biden administration. 10 years? Cool. I'd say five years at the most, in my professional opinion. 10 years might as well be like 20 light years. 10 years? No, I don't think so. I don't think so. Do you find that the administration today, and Congress, is listening to your advice? Do you think it's going to make that scale of investment? I mean, obviously the CHIPS Act, but beyond that, building a massive AI system. So as you know, I lead an informal, ad hoc, non-legal group, which includes all the usual suspects. And the usual suspects, over the last year, came up with the basis of the reasoning that became the CHIPS Act, followed by the administration's AI Act, which is the longest presidential directive in history. You're talking about the Special Competitive Studies Project? This is the actual Act from the executive office. And they're busy implementing the details. So far, they've got it right. One of the debates that we had over the last year has been: how do you detect danger in a system which has learned something, but you don't know what to ask it? Okay, so in other words, it's a core problem.
It's learned something bad, but it can't tell you what it learned, and you don't know what to ask it. And there's so many threats, right? Like, it learned how to mix chemistry in some new way, but you don't know how to ask it. And so people are working hard on that. But we ultimately wrote in our memos to them that there was a threshold, which we arbitrarily named as 10 to the 26 flops, which technically is a measure of computation, and that above that threshold you had to report to the government that you were doing this, and that's part of it. Oh no, Om, you've learned how to hijack the algorithm to feed things to people that make them aggravated, and therefore engage with the content, as opposed to the content that's actually good for them. You just described LinkedIn or something. Oh yeah, that's fine. Alright, let's continue listening. Yeah. I think all of these distinctions go away, because the technology will... now, the technical term is called federated training, where basically you can take pieces and union them together. So we may not be able to keep people safe from these new things. When the AI/ML does it, it's a problem, but when companies do it for profit, it's not a problem. That's an interesting standpoint, isn't it, really? It is. Yeah, I mean, it's arbitrarily deciding that, firstly. And secondly, isn't OpenAI now going for-profit instead of non-profit? Well, let's talk about a real war that's going on. I know that something you've been very involved in is the Ukraine war, and in particular, I don't know how much you can talk about White Stork and your goal of having 500,000 five-hundred-dollar drones destroy five-million-dollar tanks. How's that changing warfare? I worked for the Secretary of Defense for seven years. They gave me a medal, so they must give medals for failure, or whatever. But my self-criticism was, nothing has really changed, and the system in America is not going to lead to real innovation. So, watching the Russians use tanks to destroy apartment buildings with little old ladies and kids just drove me crazy. So I decided to work on a company with your friend Sebastian Thrun, a former faculty member here, and a whole bunch of Stanford people. And the idea, basically, is to do two things: use AI in complicated, powerful ways for these, essentially, robotic wars, and the second one is to lower the cost of the robots. Now, you sit there and you go, why would a good liberal like me do that? And the answer is that the whole theory of armies is tanks, artillery, and mortars, and we can eliminate all of them, and we can make the penalty for invading a country, at least by land, essentially be impossible. It should eliminate the kind of land battles. Historians many years from now will look at my ridiculous commentary on the Arguing Agile podcast, like: Eric Schmidt thinks he's doing something bold and innovative and new, coming up with new methods of warfare. Imagine all the people in World War One that were trained that this is the way to do warfare: you ride a horse, you ride them forward, everyone lines up in lines. And then people got in a ditch and started shooting each other, and then nobody knew what to do. We can't move forward; they're in a ditch, and if we get out of our ditch to get into their ditch,
we get killed. And then someone's like, what if we just drove a big metal truck across from our ditch to their ditch? The wonderful thing about listening to a talk like this is you can hear the entitlement, you know what I mean? You can hear the, oh, all my ideas are amazing. But the rest of us, just working for a random insurance company, or Bob's Trucking Company, or whatever; you're just building a metal box with wheels and you're driving it over to the other trench with machine guns on the side. I mean, it's a drone, like an Amazon drone, but with a flamethrower on top of it that you can target down. Like, his point is it's cheap, right? So we could have millions of them, I get it. And they're cheap, and these tanks are, you know, docile, and they're expensive, so we could destroy them all. I get it. Wasn't this the whole Cold War strategy, to outspend the... okay. One of the things to know about war is that the offense always has the advantage, because you can always overwhelm the defensive systems. And so you're better off, as a strategy of national defense, to have a very strong offense that you can use if you need to. And the systems that I and others are building will do that. I want to switch to a little bit of a philosophical question. So there was an article that you and Henry Kissinger and Dan Huttenlocher wrote last year about the nature of knowledge and how it's evolving. I had a discussion the other night about this as well. So for most of history, humans sort of had a mystical understanding of the universe, and then there's the scientific revolution and the enlightenment. And in your article, you argue that now these models are becoming so complicated and difficult to understand that we don't really know what's going on in them. I'll take a quote from Richard Feynman: he says, what I cannot create, I do not understand. I saw this quote the other day. But now people are creating things that they can create but don't really understand what's inside of. Is the nature of knowledge changing in a way? Are we going to have to start just taking the word of these models without them being able to explain it to us? The analogy I would offer is to teenagers. If you have a teenager, you know they're human, but you can't quite figure out what they're thinking. But somehow we've managed in society to adapt to the presence of teenagers, right? And they eventually grow out of it. So it's probably the case that we're going to have knowledge systems that we cannot fully characterize, but we understand their boundaries, right? We understand the limits of what they can do. And that's probably the best outcome we can get. The consensus of my group, which meets every week, is that eventually the way you'll do this so-called adversarial AI is that there will actually be companies that you will hire and pay money to, to break your AI systems. Like red teaming. So instead of human red teams, which is what they do today, you'll have whole companies and a whole industry of AI systems. That's happening already; we just don't know enough about it, but it's starting to happen. I think by the time we find out it's a normal industry, it'll be too late. The people that are out there actively working to poison the training of new AI models... I think it'll be too late. Yeah, what he just said is super interesting.
I agree, and it's scary in certain ways. I'm sure he's not gonna dig deeper into that one, because that is a real, actual business, and evading that is very interesting, because it's a difficult thing to do. That makes sense to me. It's also a great project for you here at Stanford, because if you have a graduate student who has to figure out how to attack one of these large models and understand what it does, that is a great skill to build the next generation. So it makes sense to me that the two will travel together. Well, you have to assume that the current hallucination problems become less, right, as the technology gets better and so forth. I'm not suggesting it goes away. And then you also have to assume that there are tests for efficacy. So there has to be a way of knowing that the thing succeeded. So in the example that I gave of the TikTok competitor, and by the way, I was not arguing that you should illegally steal everybody's music. What you would do if you're a Silicon Valley entrepreneur, which hopefully all of you will be, is, if it took off, then you'd hire a whole bunch of lawyers to go clean the mess up, right? But if nobody uses your product, it doesn't matter that you stole all the content. And do not quote me, right? Right, dude, you're on camera. Oh yeah, that's the part where they said, oh, ha ha, you're on camera. He said, if you steal it and you're successful, then you need to hire lawyers to clean the mess up after yourself. But if you steal it and nobody cares that you stole it, then it's not a theft. Then it's cool. It's fine. Ha ha ha, you're on camera. You're on candid camera, Om. You're in the candid camera of IP theft, apparently. It's still a crime. I think this is part of the key reason this video was pulled down. I didn't expect that. He's saying the quiet part out loud: this is just normal Silicon Valley. A podcast or two ago, we talked about, hey, just launch a black cab service in London, and then when they complain to you, you're like, oh, sorry, it's London. I don't know why Australians are in London, I have no idea. But they're like, hey, I have to spend a lot of money on this license, I have to go do a bunch of training or whatever, and you're just spinning up with nothing. That's not fair; and when they do that, stick the lawyers on them. You're right, this is typical Silicon Valley thinking. That is, we know that's out there, right? That's been out there for a while. He's one of the few that actually says this out loud, right? So, what's wrong with that? That doesn't warrant taking the video down, in my opinion. We understand. But again, there's a certain cutoff in experience, where you're 15, 20 years in and you're like, hey man, everyone's kind of stealing. Yeah, I feel like in the music business, everyone's kind of stealing everyone else's songs, and everything becomes a reinterpretation of the previous generation. So I feel this was part of it. That, and the thing about, hey, you can't work from home; both of those things together probably is why this video keeps getting taken down. Yeah, I can certainly align with that. Come on, let's listen. I think we're done with the clickbait. In other words, Silicon Valley will run these tests and clean up the mess. And that's typically how those things are done.
So my own view is that you'll see more and more performative systems, with even better tests, and eventually adversarial tests, and that will keep it within a box. The technical term is called chain of thought reasoning. And people believe that in the next few years, you'll be able to generate a thousand steps of chain of thought reasoning. Right? Do this, do this. It's like building recipes. The recipes, you can run the recipe, and you can actually test that it produces the correct outcome. And that's how the system will work. Right. Well, AI/ML cannot do that right now, which is saying, hey, actually go test this thing, go build this product, put it out in the app store, see how many people download it, and if it's not above a certain threshold or watermark or whatever, pull it back, change it, iterate, do something else. Exactly. That really is the new way; it's going to learn to do that. Maybe we'll get there. I mean, that is the terrifying future that he is kind of scared of: you'll have a bunch of AI algorithms basically iterating your application, and then the application will kind of run out of control and run amok. Right. Yeah, yeah. Makes sense. I might be with him on that one, because it takes a certain amount of skill to craft something like that. Just like people make terrible assumptions, AI makes terrible assumptions. Again, I use ChatGPT for coding occasionally, and I ask it for something, and it just, like, dreams up a response. It doesn't actually try the response on a real system with real data. The amounts of money being thrown around are mind-boggling, and I've chosen to essentially invest in everything, because I can't figure out who's gonna win. And the amounts of money that are flowing are so large. I think some of it is because the early money has been made, and so you invest in everybody; a hundred million is now an AI investment. So they can't tell the difference. I define AI as learning systems; I think that's one of them. The second is that there are very sophisticated new algorithms that are sort of post-transformers. My friend, my collaborator for a long time, has invented a new non-transformer architecture. There's a group that I'm funding in Paris that claims to have done the same thing. There's enormous invention there, a lot of things at Stanford. And the final thing is that there is a belief in the market that the invention of intelligence has infinite return. So let's say you put 50 billion dollars of capital into a company; you have to make an awful lot of money from intelligence to pay that back. So it's probably the case that we'll go through some huge investment bubble, and then it'll sort itself out. That's always been true in the past, and it's likely to be true here. And what you said earlier was you think that the leaders are pulling away right now. What you just threw out was an assumption, and on the Arguing Agile podcast, every assumption must be tested. Indeed. And we are not artificially intelligent. So what he said was: I'm gonna throw 50 billion at this artificial intelligence problem, and if you don't also throw 50 billion, like as a nation-state, great-power-competition thing, whatever, then you will be at a direct disadvantage to my 50 billion, like for like. Except in the realm of software, that's patently not true.
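Before moving on, here's the "recipe" idea from above made concrete: a chain of small steps you can actually run, with a final check that the outcome is correct. This is a toy sketch; the steps are plain Python callables standing in for model-generated actions:

```python
# Sketch of chain-of-thought "recipes": execute each step in order, then
# test that the recipe actually produced the correct outcome.
from typing import Callable

Recipe = list[Callable[[dict], None]]

def run_recipe(steps: Recipe, state: dict, check: Callable[[dict], bool]) -> bool:
    """Run every step against shared state, then verify the end result."""
    for i, step in enumerate(steps, start=1):
        step(state)
        print(f"step {i}: {state}")
    return check(state)

# Toy recipe: "make tea" as three testable actions on shared state.
steps: Recipe = [
    lambda s: s.update(water="boiled"),
    lambda s: s.update(leaves="steeped"),
    lambda s: s.update(cup="poured"),
]
ok = run_recipe(steps, {}, check=lambda s: s.get("cup") == "poured")
print("recipe produced the correct outcome:", ok)
```

The point of the check is the same as the hosts' app-store example: a recipe isn't done when the steps run, it's done when the outcome test passes.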
You can throw 50 billion at it and be over everyone's shoulder, in everyone's pockets, asking when it's going to be done, and then I can throw 5 billion at it and get way more results. You certainly can. Yeah, it's not a linear thing, right? Your investment doesn't yield linear results. So this open source versus closed source debate in our industry is huge, and my entire career was based on people being willing to share software in open source. Everything about me is open source. Much of Google's underpinnings were open source. And yet, it may be that the capital costs, which are so immense, fundamentally change how software is built. You and I were talking: my own view of software programmers is that programmer productivity will at least double. There are three or four software companies that are trying to do that; I've invested in all of them in this area, and they're all trying to make software programmers more productive. The most interesting one that I just met with is called Augment. I always think of an individual programmer, and they said, that's not our target. Our target is these 100-person software programming teams on millions of lines of code, where nobody knows what's going on. Will they make money? I hope so. The current models take a year to train: roughly six months of preparation, six months of training, six months of fine tuning, so they're always out of date. With the context window, you can feed in what happened; you can ask questions about the Hamas-Israel war, right, in a context that's very powerful. It becomes current, like Google. And I think the text to action can be understood by just having a lot of cheap programmers, right? And I don't think we understand what happens, and this is, again, your area of expertise, what happens when everyone has their own programmer. And I'm not talking about turning on and off the lights. You know, I imagine, another example: for some reason you don't like Google, so you say, build me a Google competitor. Search the web, build a UI, make a good copy, add generative AI in an interesting way, do it in 30 seconds, and see if it works. Right? So a lot of people believe that the incumbents, including Google, are vulnerable to this kind of an attack. Now we'll see. So we talk about disruptors disrupting traditional players. Yeah, I think that's kind of the play he's making here. It absolutely underpins everything he's saying, because he keeps going back to that 30 seconds: build it in 30 seconds and see if it gets traction in the market. He obviously is scared of things disrupting Google's market. How can we stop AI from influencing public opinion, misinformation, especially during the upcoming election? What are the short and long term solutions? Most of the misinformation in this upcoming election, and globally, will be on social media, and the social media companies are not organized well enough to police it. If you look at TikTok, for example... I like how he says not organized well enough. You mean not incented. That's right, to do anything about it. That's what he means. And not regulated. Yeah, that's what he means. Alright. There are allegations that TikTok is favoring one kind of misinformation over another, and there are many people who claim, without proof that I'm aware of, that the Chinese are forcing them to do it. I think we just, we have a mess here, and the country's gonna have to learn critical thinking.
That may be an impossible challenge for the U.S., but the fact that somebody told you something does not mean that it's true. It could go too far the other way: there are things that really are true that nobody believes. Some people call it an epistemological crisis. I think we have a trust problem in our society. Democracies can fail, and I think that the greatest threat to democracy is misinformation, because we're gonna get really good at it. When I managed YouTube, the biggest problems we had were that people would upload false videos and people would die as a result. And we had a no-death policy. Shocking. First of all, he never ran YouTube like that. That was another team, another group; he was the CEO until 2011 and executive chairman from 2011 to 2017, so he had a governance type of interaction. But he wasn't the CEO of YouTube. The CEO of YouTube was Susan Wojcicki, who joined Google in 1999 and stepped down as the CEO of YouTube in February 2023. So overlap those dates: he became CEO in 2001, she was working at Google in 1999, and she stepped down in 2023. He stepped down from Google in 2011, right? And he stepped down from Alphabet in 2017, which is six years before she stepped down from YouTube. So his claims about how much influence he had over YouTube, let's take that with a grain of salt. It was just horrendous to try to address this. This is before generative AI. And my conclusion is, the CEOs, in general, are maximizing revenue. To maximize revenue, you maximize engagement. To maximize engagement, you maximize outrage. The algorithms choose outrage because that generates more revenue, right? Therefore, there's a bias to favor crazy stuff, and on all sides; I'm not making a partisan statement here. That's a problem that's got to get addressed in a democracy. And my solution to TikTok, we talked about this earlier privately, is that when I was a boy, there was something called the equal time rule, because TikTok is really not social media; it's really television, right? There's a programmer making the choices. The numbers, by the way, are 90 minutes a day, 200... Right. So the government is not going to do the equal time rule, but it's the right thing to do; some form of balance is required. All right, let's take some more questions. I think it was in the New York Times, suing over the use of their works for training; where do you think that's going to go? I used to do a lot of work on the music licensing stuff. What I learned was that in the 60s there was a series of lawsuits that resulted in an agreement where you get a stipulated royalty whenever your song is played, even if they don't know who you are; it's paid into a bank. My guess is it'll be the same thing: there'll be lots of lawsuits, and there'll be some kind of stipulated agreement, which will just say you have to pay X percent. It will seem very old to you, but I think that's how it will work. That's very interesting. Yeah, what he just said: when an AI model that is trained off of art that you produced is used to generate something, you will get royalties. That's very interesting. It's hard enough with art to track that. How do you track the original author? I mean, the machine learning today is a black box, because we've constructed it as a black box. We said, we don't really care what happens in the middle of this, just give us the output.
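On the "how do you track the original author" question: here's a toy sketch of one way provenance can survive to the output, by carrying source IDs along with every chunk the system retrieves, in the spirit of the reference-giving tools discussed next. The corpus and the word-overlap scoring are illustrative stand-ins, not any real system:

```python
# Tiny sketch of answer-with-references: every chunk fed toward the answer
# carries its source ID, so the output can cite where the claim came from.
corpus = {
    "linkedin-api-docs": "To attach an image, register an upload then post.",
    "podcast-notes": "Context windows now reach a million tokens.",
    "chemistry-text": "Acids react with bases to form salts and water.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Score sources by crude word overlap and keep the top k, with IDs."""
    words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(query: str) -> str:
    sources = retrieve(query)
    cited = "; ".join(f"[{source_id}]" for source_id, _ in sources)
    # A real system would hand the chunks to a model; the point here is
    # only that provenance survives all the way to the output.
    return f"(answer based on retrieved text) sources: {cited}"

print(answer_with_citations("how do context windows and tokens work?"))
```

If attribution is preserved like this, paying the original authors becomes a bookkeeping problem rather than an impossibility, which is exactly the hosts' argument below.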
But it could be constructed in a way where it can reference: because I've been trained with this, I think that this is the image you're looking for. It could be trained to output references. Perplexity is an AI solution that gives you references: hey, these are the things that lead me to believe this thing is true. All AI could be that way. And if it could be that way, it leads me to believe you always could have paid the artists that you stole all this data from, or the users, or whoever; you could have always paid them. Maybe it's like one cent, one penny. That's the music industry metaphor there. Right. But my fear is, Perplexity potentially has this capability; maybe DALL-E doesn't. So there are different tools. In its current implementation, we just feed all the data in, and we don't index the data going in, because we don't care about it, right? Yes. Yes. I'm with you. Yes, completely true. But in a future state where you're like, you know what, we need to care about it, because we have to pay all those people we stole their data from (I'm sorry, we'll call it "trained our model on their data"; we won't say stolen), if you know that you need to reference the things that you learned off of when you're producing a product at the other end, you can track it. You can definitely do it. I'm not doubting the capability; it's just that you'd have to make these models do that, and the people that make the models would have to deliberately do that. And what does that mean? That means regulating them: a forcing function of making the companies that produce models be able to tell you, to cite their works. I don't think that happens without regulation. Yeah. Will regulation actually bubble up to the surface and become a real thing?

I would like for him to expand, because he is at a point where he controls a lot of the dollars that go into legislation and lobbying for this kind of stuff. So I would like him to expand on: hey, this is where I'm putting my money, and I think we should pay these independent artists so that we can keep them afloat, so that we can keep producing better models. Because if the stream of data going into the models ceases and starts becoming just random AI-created garbage being absorbed and sucked up into further and further models, the models will become garbage over time. So I would expect that he, as a person who controls the purse strings, would be saying: oh no, no, we need to funnel more money back to the people creating original thought that feeds these models. I would think that that's where he's going. I'm going to assume that what we just heard, which triggered this small discussion, is the end of it. Unfortunately, he didn't mention that America is behind the Europeans when it comes to legislating and regulating. Well, actually, I think he just uses the Europeans as a stalling point, as a roadblock; he's saying the legislation they're putting in place is slowing us down.

There are a few players that are dominating AI, right, and they'll continue to dominate, and they seem to overlap with the large companies that all the antitrust regulations kind of focus on. How do you see those two trends... do you see regulators breaking them up? Yeah, talk to me about antitrust, Eric Schmidt. Yeah, so in my career, I helped Microsoft get broken up, and it wasn't broken up. And I fought for Google to not be broken up, and it's not been broken up.
So it sure looks to me like the trend is not to be broken up, as long as the companies avoid being John D. Rockefeller Sr. I studied this; look it up, that's how antitrust law came about. I don't think the governments will act. The reason you're seeing these large companies dominate is who has the capital to build these data centers, right? Have Reid talk to you about the decision they made to take Inflection and essentially piece-part it into Microsoft. Basically, they decided they couldn't raise the tens of billions of dollars. Is that number public that you mentioned earlier? No. Should we do one more question? Yeah, go ahead. I was wondering where all this is going to leave countries who are non-participants in the development of frontier models and access to compute, for example. The rich get richer, and the poor do the best they can. They'll have to. The fact of the matter is, this is a rich country's game, right? Huge capital, lots of technically strong people, strong government support. There are two examples. There are lots of other countries that have all sorts of problems; they don't have those resources; they'll have to find a partner, they'll have to join with somebody else, something like that.

It's like, hey, there are two players, two great-power-competition players in this market, I'll let you guess who they are, and whoever outspends the other is gonna win. That's kind of his take. I don't believe it, for reasons that I've read widely on the subject, just in the news and so forth, and I know there are other countries that are coming up by leaps and bounds. There's a lot in here to dig into. Even listening to this is annoying: the two competitors win because they can outspend each other? That's not true at all. There are open source models out there that are really, really good. Really good. Yeah. And if you're going to train the open source models on the particular subject matter or thing that you care about, you can get really dialed in. Let's listen to the rest of it. Are you recording the rest of this? Yeah.

Do you have any advice for folks here as they're building their business plans for this class, or policy proposals, or research proposals? I am struck by the speed with which you can build demonstrations of new ideas. In one of the hackathons I did, the winning team's command was, fly the drone between two towers, and it was given a virtual drone space. And it figured out how to fly the drone, what the word "between" meant, generated the code in Python, and flew the drone in the simulator through the towers. It would have taken good professional programmers a week or two to do that. The ability to prototype quickly really matters, because part of the problem with being an entrepreneur is everything happens faster. Well, now, if you can't get your prototype built in a day using these various tools, you need to think about that, right? Because that's what your competitors are doing. So I guess my biggest advice is, when you start thinking about a company, it's fine to write a business plan. In fact, you should ask the computer to write your business plan for you. As long as it's legal. As long as it's legal. I can talk about that after you leave this. But I think it's very important to prototype your idea using these tools as quickly as you can, because you can be sure there's another person doing exactly that same thing in another company, in another university, in a place that you've never been. Alright, well, thank you all. I'm going to rush off.
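On Schmidt's prototype-in-a-day advice, it's worth seeing just how little code a first pass takes now. The sketch below assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in your environment; the model name, the prompts, and the drone-inspection startup are placeholders we made up, and any hosted or open source model would work the same way.

# A minimal sketch of "ask the computer to draft your business plan."
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; swap in whatever model is current
    messages=[
        {"role": "system", "content": "You are a startup advisor."},
        {"role": "user", "content": "Draft a one-page business plan for a "
                                    "drone-inspection startup: problem, market, "
                                    "product, and a 90-day prototype plan."},
    ],
)
print(response.choices[0].message.content)

The point isn't these dozen lines; it's his claim that someone else is already running something like this against your idea, so the cost of testing it has collapsed to roughly zero.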
Alright. So, the Eric Schmidt video. I don't think there was anything in there that warranted it getting taken down, to be honest. He did express some ridiculous opinions about working from home, some things about Google stealing code, but he didn't really go into penalty land for me. He came close, right? Maybe for some people. He misunderstands some things, and some things he just has a pretty fixed mindset about, but taking it down and trying not to have people watch that video? I wonder if anybody expressed any reasoning as to why they took it down. They don't have to; they can just say copyright and take it down. Yeah. I mean, also, there's a lot of the quiet part said out loud, you know what I mean? Hey, just like, if TikTok is banned, and the people lobbying the government to get it banned happen to be, I don't know, me, maybe you guys should just take their business model and fill in the gap when that void opens up. That kind of stuff. You know. I think that's right. Those kinds of things made somebody uncomfortable, probably. Yeah. I mean, I could see it. Like, stand by your man; stand by your ideas. If those are your ideas, hey, Google doesn't really invent anything out of thin air, we just kind of do things better that people are already doing, then stand up for yourself is what I'm saying. I mean, we're net, I think. Yeah. Or something. Well, I don't know if we change hearts and minds with this podcast, but I think it's important. I think it's important for you, the listener, to understand other people's opinions. That's part of why we're doing the Arguing Agile podcast in the first place. Right. Yeah. And, you know, I think if you watch this and you find yourself thinking through some of the issues that we commented on, then we've achieved our objective. Basically, we just want you to think about this, right? Yeah, I mean, we beat him up in a few little areas, but if he were sitting in the room, he probably would be okay getting beat up; I'm pretty sure he got beat up enough in the boardroom. Alright. Well, you know, some of the things he said were out-and-out his opinions, like what's a terrible language, et cetera. It's just, like, your opinion, man. Yeah? Well, that's just, like, your opinion, man. But it is what it is; it's one man's view. So let us know what you think about this. This was different from our usual podcast; we just went with a different modus operandi. So let us know what you think, and like and subscribe down below.