Dave's Podcast Playground
This is a podcast that changed its description for Dave's Native App Testing things.
AI Frontiers - Dithering
Can you imagine a world where AI development is driven by speed, iteration, and open source opportunities? In today's episode, we dive into the exhilarating realm of AI and discuss Facebook's recent release of their LLaMA language model. We analyze how the leaked internal Google document highlights the need for open source and raises questions about the lack of a moat in the AI space.
But that's not all! We also unravel the intriguing and somewhat ironic relationship between tech giants Apple and Facebook. Discover how these two powerhouses benefit from their symbiotic yet antagonistic connection, and how Apple's open source model plays a significant role in the success of companies like Instagram and Facebook. Don't miss this thought-provoking conversation as we explore the rapidly changing landscape of AI and open source!
Speaker 1: John, I haven't been this excited for a Dithering in a while. Not because of the topic, all our topics are interesting, but you just gave such a great rant about your rental car experience that I know you're already fired up. So I just got to direct this fire in the right direction.
Speaker 2: Of all the times to rent a bad car, it was for a Philly to Boston, Boston to Philly return trip with my son from freshman year. So it was the longest drive with the worst rental car.
Speaker 1: A long drive in a bad rental car is truly the worst.
Speaker 2: I'm thinking about changing my entire blog from a site that is mostly about Apple to one that is mostly just shitting on Kia Motors.
Speaker 1: We don't have time for the whole rant, but one of the funny bits is you could not figure out how to open the trunk, and you felt validated when you went to the Four Seasons and the valets couldn't open it either. And what did you call them? Well, they're professional trunk openers.
Speaker 2: And then my wife comes around to the passenger side, says the guy wants you to open the trunk. And there is no button, number one. There's no button inside the car to open it. It's as though they hid the button for how to open the trunk. Anyway.
Speaker 1: It was a great rant that does not fit in 15 minutes. Everyone that loves the 15-minute format, it is your fault you're not going to hear the whole story. But there are a number of developments.
Speaker 1: This week, Apple's earnings just came out. I had a chance to go through them a bit, but I think there's actually an angle here. One of the things I say about Apple is that one of the big things they need to hope for, as far as AI goes, I think, is that it becomes a real sort of open source phenomenon. That's going to play well to their strengths: they have great devices, they have hardware that could potentially do this stuff locally. And this came up first in the context of Facebook, where Mark Zuckerberg spent a lot of time talking about open source, though he didn't commit to anything as far as these models, you know.
Speaker 1: But it's clear that it's sort of top of mind, as it should be, because one of the biggest things that's happened in the last three months was the LLaMA large language model from Facebook, which was released to researchers under a non-commercial sort of license and was promptly leaked on BitTorrent. What struck me was just the speed of innovation and optimization that happened sort of immediately, and it's definitely one of those classic commoditize-your-complements sort of opportunities. And then, on top of that, was this really fascinating internal Google document that got leaked, basically saying, yeah, we have an open source problem here, or slash opportunity. I think this is a huge deal, and it's sort of been the week for this stuff to come up.
Speaker 2: Well, I think the key thing in this leaked document was the headline, really. It's the rare headline that I think aptly summarizes it: "We have no moat, and neither does OpenAI." I'm not sure that was the internal headline, but it was the headline of the newsletter that leaked it, and that's the gist of it, right? Let's just concede that Bard, within a few weeks' or months' time, will be on par with OpenAI.
Speaker 1: Yeah, it has gotten a fair number of revisions over the last couple of weeks. So it is iterating fairly quickly.
Speaker 2: And we know already that these things, once they leave the internal confines and get exposed to the broader world, do improve quickly, because all of a sudden, with tens of thousands or more people hacking at them and throwing crazy things at them, they do improve quickly. So let's just concede that they're on equal footing technically. I think we've seen this story. This is one of those times where I feel like we can zoom out to the history of the computer industry, right? When breakthroughs happen, sometimes the first or second company that jumps out to an early lead becomes a long-standing titan, and sometimes they're a flash in the pan. I think it's very hard in the moment to tell. That's where experts like me and you come in.
Speaker 1: Yeah, you just summed up a bunch of really interesting threads. Number one, one of the things that is very interesting in this document is sort of the admission that, of course, clearly, spending tons of money to build a super complex, tons-of-data, tons-of-parameters, tons-of-weights sort of model, you're going to get a better model, but they don't iterate well. Like, it's hard to make improvements, and the way it's worked as long as these models have been internal is: okay, we've learned a bunch of stuff, now let's train a new model, and you're just starting from scratch. And one of the admissions in this document, and again, I don't know if this is Google's position broadly, it's just someone internally, but it does sort of ring true to your point, is that iterative
Speaker 1: Speed is super important, and having a model that you can easily iterate on and stack improvements on, where you're not starting from scratch every single time, in the long run is probably going to be a better thing to do. There's a super important vector of quality, I don't know if quality is the right word, which is speed. Speed really matters when it comes to product quality, speed of iteration, because you get that feedback, and particularly with AI, where you have people interacting with it, you get basically internet-scale feedback on what's working and what's not, and you want to be able to incorporate that. To throw it all away just to train a larger model that costs hundreds of millions of dollars, or tens of millions of dollars, is that really going to be the way things go going forward? It seems doubtful.
Speaker 2: Yeah. So is there a moat or not a moat? That's the big question. And if there is, maybe OpenAI is first and maybe they are amazing. I guess, to me, clearly the bigger question is this LLM breakthrough moment, and clearly that is transformative, I think. Well, I guess there's always some naysayers, but a shocking majority of people observing agree that this is a breakthrough for the whole industry. And again, game changer is a phrase that's overused, but everything's going to change. The contrast to, like, crypto or whatever is, like...
Speaker 1: There are super compelling demos and use cases right now, right? Like, you don't have to theorize about what it might be.
Speaker 2: But is the breakthrough the abstract notion of LLMs and how they work, or is it OpenAI's particular expertise, the genius of their collection of researchers, and the head start they have, something nobody else, maybe other than Google, is possibly going to be able to catch up on? Or is this just like the Unix-like operating systems, where, like, Sun had this commercial Unix type thing in servers, you know, and then all of a sudden, no, that's going to be commoditized? You know, it might take years, but it's going to be just completely commoditized, and Sun, in fact, had no moat whatsoever.
Speaker 1It's a huge question. I mean like again, not everyone agrees with this like, are small models going to be good enough relative to big models, right, as you know, can I think there's a question here. Can these ever be good enough, or are the big models going to be so much better and keep improving that, like? I mean, like, can you ever be smart enough? right, i think there is a real question here. But there are a couple of points to note.
Speaker 1: Number one, DALL-E was like this huge thing a year ago, and now DALL-E is like a distant third, right? Like, Midjourney is the sort of still-closed one that everyone seems to feel is the most advanced as far as image generation goes. But Stable Diffusion, the open source one, that's the one that got tons of iteration, tons of optimization, and that's the one that's built into a bunch of products. Like, there are Stable Diffusion products out there now, we talked about that a little bit ago, and that's the one that Apple sort of optimized around. And in that space, it already seems to be the case that open source won. Now, for images, the size of the model definitely does not need to be nearly as big, which is kind of paradoxical, but it makes sense.
Speaker 2: If you kind of think about it, right? Whereas with large language, like, "a picture is worth a thousand words" is the phrase that comes back to you, which means you need a lot of words to do a similar thing. And again, I think that's what the Google internal paper is pointing out: that in a very short matter of weeks, not even months, but weeks, there was enormous proof that much smaller models can be competitive, and in some targeted areas, a much smaller model with high-quality, curated data can actually outperform these massive models.
Speaker 2: And if there was a moat, the massiveness of the OpenAI models, and presumably Google's models, I forget what they call theirs, that's the moat: it takes billion-dollar data centers of custom design to even run them. But I think the proof is already out that you don't need that sort of scale even now, let alone as hardware evolves over the years. You know, something that's a moat for only two or three years does not make OpenAI a trillion-dollar company, right?
Speaker 1This is where I think what open ai and I've written this before and again that definitely sees the case for images, for large language models. It's still very much sort of an open question, but this is why I wrote this a couple weeks ago. To me, open ai is big opportunity and what they need to lean into is chat, gpt, owning the, a dominant or very large consumer interface like, and that's sort of like that is the story of tech or that's the irrigation theory. That's sort of what I've been writing about for years is, if you own the point at which customers interact, that gives you sort of downstream, that is a moat like where customers are used to. The way I interface with these things and the way I get stuff done is through this particular interface, the.
Speaker 1It's very hard for another company to come along and say, oh, our model is sort of slightly better, right, because they're already accustomed to, like habit and particularly as it expands, and they have these plug-in infrastructure and all that sort of stuff. That's where I thought they should go sort of all along. I think this sort of leans into it. But the other challenge is to the extent open source becomes compelling, and I think this is the apple angle here. Apple, in my estimation, should be praying. This is the case that open source actually does win, because that's something they can incorporate. They can build it into their operating systems, into their phones, they can tune it for their, their processors and developers can sort of leverage it and that would be, you know again, commoditize your compliments. Same thing with facebook. The reason why is facebook sort of ahead in this space?
Speaker 2: Because they have an obvious place to manifest sort of AI, which is in their networks, in their products. And again, going back historically, one thing that's been proven, and it pains me because I love software so much, but ultimately the value of pure software is negligible in general, because open source versions of anything come along and reduce the value, and if you want a sustainable advantage, you need something else. So for Apple, it's obviously been the hardware, right? And hardware cannot be open sourced. Or you can make open source designs, but you can't go pick up open source free MacBooks or iPhones. Or iPhones, I guess, is the better example here. And for Facebook, it's data, right? It's the information they know about three billion users around the planet, so they can freely open source all sorts of stuff, you know, like React.
Speaker 2In this I mean, they've released gobs and gobs of much used outside Facebook open source Over the years and might do it again with open AI, and they can do this without any consequence to their business because they're building products around the, around the AI, that are based on the data they have, which is what they know about three billion people right, that nobody else has. They don't have to worry about somebody else coming along. Yeah, doing that well, and you get again. It's one part of the open source mantra that really does work it if, if, opening it up to the community and you get the expert community people to pitch in and it improves the open, it, the open source thing Facebook benefits from that.
Speaker 1: I think the big question, probably, in their mind is, these models are still really expensive, right? And the models are also effective because they are using Facebook data, right? Facebook has a lot of data, and can they make better models because of that? And is there a risk, if they do go in an open source direction, that they are giving away too much in this particular case, right? That's why the earnings call was so interesting. You could kind of see Mark Zuckerberg thinking out loud about this, where he's like, I know this is good, I know we've done this a lot, but is this the right thing to do? He wasn't saying that explicitly, but it's sort of implicit. That's why I quoted so much of his earnings call in my update, because you can see them, like...
Facebook and Apple's Irony
Speaker 1: It's really tempting to sort of go this route, but it's like, man, we spent a lot to get to where we are. Do we want to give that away? Particularly when one of the biggest beneficiaries of them giving it away would be Apple. Like, the irony of the Facebook and Apple symbiotic, yet hate-each-other relationship strikes again.
Speaker 2: Yeah, I was about to say symbiotic, but hate each other. They're like siblings who somehow are profiting at the same time, but hate each other, yeah, and do not want to acknowledge that they benefit from each other.
Speaker 1Yes, at all. I mean, that's the way the Apple built in this sort of capability or like, say, there was some sort of open source model that was shipped as part of the operating system and, on the iPhone, is finally tuned to the thing. Guess what company is going to best leverage. That is going to have like like manifest that in amazing way, it's going to be Instagram. It's going to be Facebook, right, like it really is funny how that works out. Yeah.
Speaker 2: All right, all right.