Preparing for AI: The AI Podcast for Everybody

VOICE CLONING, ARTISTS AGAINST AI & AI JESUS: Jimmy and Matt debate their favourite AI stories from November 2024

Matt Cartwright & Jimmy Rhodes Season 2 Episode 24


It's monthly round-up time! Unlock the secrets of AI's transformative power in our latest episode, where we tackle everything from cutting-edge voice cloning to AI's playful antics. Imagine a world where digital interactions are indistinguishable from human conversations: ElevenLabs is making this a reality with their breakthrough in voice cloning and text-to-speech tech. We'll also revisit HeyGen and see how avatar creation has evolved. But it's not just about the tech; we dive into the controversies and ethical dilemmas that come with these advancements, especially concerning data privacy in the arts.

Amid the digital revolution, artists are up in arms about the use of their creations by big tech companies. We shed light on how the artistic community is rallying to protect their work from being exploited by AI tools. Is the future of unique art at stake? Meanwhile, in Switzerland, an AI Jesus is stirring discussions about technology blending with spirituality, sparking debates on the role of AI in faith-based settings. This episode is laden with stories that explore AI’s controversial yet fascinating journey across different realms.

Join us as we navigate through the competitive landscape of AI business models, explore its medical innovations, and question the reliability of AI's citations. Plus, meet an AI granny who hilariously handles phishing calls—proving AI isn't just about business and ethics, but also about having a bit of fun. From detecting deepfakes to solving CAPTCHA puzzles creatively, we cover the technological, ethical, and whimsical facets of AI, offering you insights into a rapidly evolving domain that’s becoming increasingly integral to our lives.

Speaker 1:

Welcome to Preparing for AI, the AI podcast for everybody. With your hosts, Jimmy Rhodes and me, Matt Cartwright, we explore the human and social impacts of AI, looking at the impact on jobs, AI and sustainability and, most importantly, the urgent need for safe development of AI, governance and alignment.

Speaker 2:

Don't you want me like I want you, baby, don't you need me like I need you now? Sleep tomorrow, but tonight go crazy. All you got to do is just meet me at the... Welcome to Preparing for AI, the AI podcast for everybody, with me, Barry Venison, and me, Alan Lamb, and we're back with our most popular series, the monthly roundup. So we're going to get straight into it, and I'm going to hand over to Jimmy to start off with our first item, which is going to be examples of actual useful AIs.

Speaker 3:

Yeah, so I came up with this because, obviously, we did an episode on this not that long ago, but we focused on large language models. I've been getting quite a lot of advertisements for AI products, and I feel like this is a relatively new thing. Now, I appreciate I do do a lot of AI-related searches on Google, so I'm obviously going to get AI product placement stuff, but I also haven't seen it that much in the past. You know, it's usually just BetterHelp and stuff like that, which I'm obviously greatly in need of as well. But yeah, I've been getting adverts specifically for Google Workspace and for, what's the one, ElevenLabs, which we did talk about.

Speaker 2:

You run an AI podcast. Do you understand how algorithms work?

Speaker 3:

I do, but what I'm trying to say is, I feel like we're getting to the point now where, rather than kind of unpolished alpha versions of stuff, novel stuff, frivolous stuff...

Speaker 3:

We're actually starting to get to the point where companies are releasing sort of finished products, and therefore they've started advertising them on YouTube, Google, that kind of thing, and I think ElevenLabs is a really good example. So ElevenLabs, I would put this definitely in the category of business application. You can use it for personal use, but that's more for fun. And very much like the interview we did with Cerberus not long ago, with ElevenLabs, now they have conversational AI, which has just been added within the last couple of weeks, and this is a bit of a game changer in my opinion. So, just to go back a bit and explain ElevenLabs as a company: they focus on AI voice-to-text and text-to-voice.

Speaker 3:

So basically it's a website where you can clone your own voice, or you can select from a range of pre-made voices, so to speak, and you can get them to do text-to-speech, and it's really good. I mean, we're talking pro-level stuff. The game changer, as I say, that's just come out in the last couple of weeks, is that they've introduced conversational AI. So I've actually cloned my voice. I've cloned your voice as well, and you can now create, within ElevenLabs... Just to clarify, when you said "your voice", that's me, that's not the listener, because everyone listening now thinks you've cloned their voice.

Speaker 3:

I have not cloned any listeners. Disclaimer: I have not cloned anyone's voice that I don't have permission for. Although, I don't think Matt gave... You haven't got permission for mine, but I'm going to give it to you now. Well, we've got yours: you're giving it verbally and it's recorded on this podcast. So, yeah, I used our hours of quality podcast content to create it.

Speaker 3:

So, yeah, look out soon for an episode created just by AI Jimmy and AI Matt. Because what you can also do is give these conversational AIs knowledge: you give them documents which build up their knowledge base. So I gave one transcripts from the podcast, for example. And then you can also create a persona for them. And another great use of AI is to use something like Claude or ChatGPT to actually create the system prompt for the model. So this all requires very little effort on my part, to be honest. But, yeah, I've created a few models.

Speaker 3:

I've cloned my own voice. It's quite impressive stuff, and it feels like this is one of those things where, like I say, ElevenLabs already lets you pick from a range of large language models to use in the background to form responses. You've got one model doing voice-to-text when you talk, you've got another one doing text-to-voice when it talks back, and obviously it's running off some kind of large language model in the background to decide what to say back, and then it's using your system prompt. So it's a whole bunch of stuff coming together.
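To make that pipeline concrete, here's a minimal sketch of the three stages Jimmy describes: voice to text, a language model forming the reply from a system prompt and knowledge base, then text to voice. Every function name here is a hypothetical stand-in, not ElevenLabs' actual SDK; the real service wires these stages together for you.

```python
# Hypothetical sketch of a conversational voice agent. All functions are
# stand-ins for the three stages described above, not a real API.

SYSTEM_PROMPT = "You are AI Jimmy, co-host of the Preparing for AI podcast."
KNOWLEDGE_BASE = ["<podcast transcripts uploaded to ground the agent>"]

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text call (stage 1: voice to text)."""
    raise NotImplementedError

def generate_reply(context: str, user_text: str, history: list[str]) -> str:
    """Stand-in for the background large language model (stage 2)."""
    raise NotImplementedError

def synthesize(text: str, voice_id: str) -> bytes:
    """Stand-in for text-to-speech with a cloned voice (stage 3)."""
    raise NotImplementedError

def handle_turn(audio_in: bytes, history: list[str]) -> bytes:
    user_text = transcribe(audio_in)
    context = SYSTEM_PROMPT + "\n" + "\n".join(KNOWLEDGE_BASE)
    reply = generate_reply(context, user_text, history)
    history += [user_text, reply]  # keep the conversation state between turns
    return synthesize(reply, voice_id="cloned-jimmy")
```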

Speaker 2:

It's really quite impressive, in my opinion. There's another example I have, which we talked about a while ago. When we talked about it, we said it was rubbish, but we could foresee where it was going to get to, and I think it's now kind of at that point: HeyGen. So HeyGen creates a kind of avatar. Originally, when we tried using it, I think there were maybe three or four different avatars you could use, and you put in text and the avatar will then do the speech. It was a bit clunky, like the arms were kind of moving. It's not perfect now, but you can now give it a picture and it will create the avatar from that picture. And this really surprised me, because I thought a lot of blocks would be in place to stop this kind of thing, but you could take a photo from your photo library and it would create an avatar with that. I don't know.

Speaker 2:

There's obviously, you know, watermarks and stuff. There's obviously security stuff in there. But it's already evolved to a place where there's a lot of flexibility and creativity in what you can do, and the idea, when I first saw it marketed, was quite a commercial use: creating marketing videos and stuff like that, which I think it would be great for. There's obviously the potential for deepfakes.

Speaker 2:

Like I say, I haven't actually used the newest version, but I'm 100% certain that there will be things in there to stop abuse of it. At the moment, whether people can hack in, I'm sure there are uses you could put it to which are obviously dodgy or not intended. But I think it's an example, very much like ElevenLabs, of something that has now reached a point where it looks like it's becoming... They had to release it early on to get ahead in the market. I guess one for investment too, because you need to be the first one to market; early adopters need to see the first one to kind of jump on that bandwagon. But I do think it's now at the point where, yeah, it looks like it's legit, and it looks like it's something that you can actually use professionally now.

Speaker 2:

You know, six months ago, such a short time, but we were not in that space. But that's the sort of example I had. Yeah, I agree with you, we're getting to the point where these things are no longer necessarily kind of frivolous uses. They're finding some actual uses, which they need, because otherwise they're not going to make any money out of them.

Speaker 3:

Yeah, there's so many business cases for the stuff we're talking about. So educational training videos, like professional training videos at work, is a good example. The conversational AI opens up a whole world. ElevenLabs are definitely going to make a load of money off call centres and any kind of business where you need to have customer interactions, because this is like the next generation of that industry, where you phone up your bank and initially you're speaking...

Speaker 3:

The weird thing is, I think, I don't know if this has already happened, but you could easily be speaking to one of these agents and not realise you're not speaking to a person. I think we'll get used to it, and I think you'll sort of be able to pick out some of the idiosyncrasies and stuff like that, but they're so good that I think it'll be difficult. One thing I would say on the, it's not copyright, but kind of, you know, not cloning voices that you don't have permission for, and not copying people's images: I think these platforms are touting themselves as just that.

Speaker 3:

As a platform. ElevenLabs definitely has disclaimers that say you need to have permission to use this voice and clone this voice, and that kind of thing, when you're talking about voice cloning. I'm assuming that HeyGen has very similar terms and conditions. So if you get caught cloning Trump's voice or something like that, then you're probably going to get kicked off the platform at some point. But I think they are very much trying to go down the road of: we're a platform, we're not necessarily responsible for what people create with our platform. Let's see how that goes.

Speaker 2:

But it's interesting, isn't it, how we're at the point where it feels like large language models are plateauing. I was going to talk about it on this episode, but I thought, you know, it's something we've talked about before, so maybe not. But there's more and more evidence of the kind of plateauing in large language models, and the need to try and find something else to make the next breakthrough. But at the same time as that's happening, that shouldn't distract from the fact that, like you say, there are now these really amazing use cases. Because the idea in the large language models is, you know, to get to artificial general intelligence. Well, actually, for useful things we don't need that. We certainly don't need artificial superintelligence, and you could argue we don't need artificial general intelligence, and all of those potential negatives, the kind of existential, systemic threats that come with that, they're not there with large language models.

Speaker 2:

There are threats, but there are also genuinely really good use cases, and they're starting to come through now, which means I think next year maybe we'll start to see a lot more adoption in sort of enterprise and jobs. But I think as well, I feel at this point that it is currently going to work hand in hand with people. The call centre example is maybe not a good one, but there's a lot of things where there isn't enough trust; they're not 100%, so there's still going to be people in the chain. Longer term, you are going to see productivity gains, yeah, though I'm not so confident. But shorter term, I think next year we might see genuine progress and improvements in productivity that are not necessarily at the cost of massive job cuts. Some people are going to lose their jobs, but it might not be as much as we maybe feared six, seven, eight months ago.

Speaker 3:

Yeah, I feel the same in this respect. I mean, I still think with call centres, you know, these ElevenLabs-type AIs, they're still not going to be perfect, they're not going to answer every question, so you're still going to need humans.

Speaker 2:

You're not going to need someone in a call centre who's going to be able to answer every question. So that's the thing of working together, isn't it? The AI is good for some things; some things need the person.

Speaker 3:

Yeah, but presumably the way these things will work is, you know, okay, I need to speak to a human now, and then actually you'll go through to a human. But presumably these AIs are going to be able to answer, you know, 50, 60, 70, 80, 90% of questions.
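As a rough illustration of that handoff, here's a minimal sketch of confidence-based call routing: the AI answers what it can and escalates to a human otherwise. The threshold and the answer function are hypothetical, just to show the shape of the logic, not any real product's behaviour.

```python
# Hypothetical sketch of AI-first call routing with human escalation.

HANDOFF_THRESHOLD = 0.7  # assumed tunable cut-off, not from any real product

def answer_with_confidence(question: str) -> tuple[str, float]:
    """Stand-in for an AI agent returning a reply and a confidence score."""
    raise NotImplementedError

def route_call(question: str) -> str:
    # Callers can always ask for a person outright.
    if "speak to a human" in question.lower():
        return "Transferring you to a human agent."
    reply, confidence = answer_with_confidence(question)
    if confidence < HANDOFF_THRESHOLD:
        return "Transferring you to a human agent."
    return reply  # the 50-90% of questions the AI can handle itself
```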

Speaker 2:

The human will just be a different AI. Someone believes they've gone through the AI and they've now gone to the human, but it's just version 2.0 of the same avatar.

Speaker 3:

I think the web is a really good analogy for this; we've talked about it before. Obviously we're on a massively accelerated timescale compared to the web, but the underlying internet actually existed back in the 70s, the first real version of the actual World Wide Web dates from around 1990, and it took until the late 90s, early 2000s to have real kind of business applications for it. I think we're on a massively accelerated timeframe compared to that with AI, because of the experience that we've gone through with the web before. But we've talked about it before on the podcast. Despite that, you're right: the plateauing of LLMs, in terms of how smart they're going to get, is definitely happening. However, now we're starting to see these business applications.

Speaker 2:

Okay. The next one is something called, well, I've called it, artists against AI. So it's particularly a group who call themselves PR Puppets, who say that they were lured into providing unpaid labour through bug testing, feedback and experimental work. They've called out OpenAI for having complete control over what's presented to the public, in terms of Sora and creative tools, saying that this early access programme that they were working on appears to be less about creative expression and critique, and more about PR and advertising. And then this group, who obviously have an issue with the way that training data and people's work is being used for free on these kinds of creative models, somehow leaked Sora.

Speaker 2:

So Sora is OpenAI's video creation tool, which still hasn't properly been released. I think it was about three hours, but everyone could apparently access that model for three hours before it was shut down by OpenAI. It also seems that the videos generated during that time have been taken down off Hugging Face, which is kind of like the platform where you would be able to find it. And the group have called on people to stop using proprietary tools like Sora and to use open source projects instead. So they're not asking people to not use AI, but they're asking them to not support OpenAI, which, you know, is something I can get behind, to be honest. So, yeah, they express support for using AI tools to create art, but they oppose gatekeeping, or basically being unpaid PR for, and training, corporations that are then going to make loads of money out of it.

Speaker 3:

Yeah, I mean, I'm in two minds on this one. I think the problem is everyone put their data on the internet, and we've been doing this for years, right? Everyone's been putting their art, everything they produce, on there, making it freely available on the internet, on platforms as well, particularly, right?

Speaker 3:

Yeah, on platforms where you sign up to terms and conditions which basically say, we can sell your data and do what we want with it. And, you know, there are different versions of that, but effectively the cat's out of the bag, really, in a way. And I feel like, on the one hand, you probably haven't got a leg to stand on, but on the other hand, no one knew what commercial applications this stuff was going to be used for in the future. I mean, interestingly, I don't know whether the same groups of people are upset that Google's using their data for advertising purposes and all this kind of stuff. Very possibly. But I genuinely feel like the cat was out of the bag with the internet, and with providing Google, Facebook, all these big tech companies, with all of your information and data freely, and then, effectively, all they've done is used it exactly as they said they were going to.

Speaker 2:

I don't disagree with you. My reason for bringing this one up as one of my monthly most interesting things is less about that, actually; it's about the fact that I see this as a backlash. I've talked from the very first episode about what I foresee, my predictions about the backlash, and I've talked about riots on the streets and stuff, which I still think, if things go badly in terms of jobs and stuff, will happen. But this, although on a minor scale, I mean, I guess you could compare it to the kind of Just Stop Oil thing, right, throwing some paint over a painting, in the kind of impact. It's a few hours, it's not that much of an impact. But for me, it just shows that artists and creatives are starting to really feel not just that they're angry about this, but that they want to do something.

Speaker 2:

You know, that kind of protest always, or often, starts in artistic communities. Those are people that can think about expression; they care about these things.

Speaker 2:

So I just think it's really interesting that this has started. We're kind of talking about the positive use cases, but actually, in a lot of the surveys, what people are most worried or bothered about with AI at the moment is the use of their data.

Speaker 2:

So it stands to reason that this would be the first thing. Well, it's not the first thing, we've had things on driverless cars and stuff, but this is the first example I've seen for a while of a backlash against what is essentially a large language model powered tool. So we're starting to see small-level backlash protests. I don't think this is going to move the needle much either way, but I think it's interesting that people are starting to actually do something. They're not just saying it, they're actually starting to, not vote with their feet, but I guess with their hands or with their brains. They're actually starting to act, and they're actually starting to try and do something to kick back against it.

Speaker 3:

Yeah, and we've said this before: I feel like artists that produce unique pieces of art, like your Banksys and Van Gogh-type figures, they probably don't have anything to worry about, because high-level art is still going to be valued.

Speaker 2:

Possibly more so if there are fewer artists, because it will be more unique. It's just no one will have any money to buy it. Well, some people will, the richest 1%. They'll be buying all the art.

Speaker 3:

Yeah, totally. We'll probably be there. It'll be the richest 1% for now, and the robots in the future.

Speaker 2:

Me and you can buy Van Goghs now, and we won't be able to in the future.

Speaker 3:

Well, speak for yourself, I've got a few tucked away. But when we talk about artists, obviously there's a huge community of artists that are not the kind of 1%. They're actually graphic designers, people like that. And with that, it's actually quite sad, because quite often people go into those careers because they're passionate about it, especially in that kind of design and art space. And now those kinds of jobs, I'd imagine, are seriously under threat. We've already had episodes where we've talked about people who've lost their jobs who work in those industries. So, yeah, in that respect, it's quite sad that now you can churn out commercial art as quick as you like. So what have you got next for us, Matt?

Speaker 2:

I think this is a good one. It's a controversial one, but I think it's a good one: the AI Jesus. So, the AI Jesus. I don't know if you've heard... Well, you have heard about this, because I told you before the podcast, but for people online, I don't know if you've heard about this. It's quite the intro. It is, yeah. This is what's been called an innovative art installation at Saint Peter's Chapel in Lucerne, Switzerland, I think I pronounced that correctly, which was designed to explore the intersection of technology and spirituality. Developed in collaboration with the Immersive Realities Research Lab at Lucerne University, this project features a holographic representation of Jesus that interacts with visitors, responding to questions about faith and the Bible in any one of a hundred languages.

Speaker 3:

That's... yeah. Is it ElevenLabs?

Speaker 2:

Possibly. So I'll be honest with you, I find this weird, and from a kind of theological point of view I find it a little bit troubling, because this idea of using AI to impersonate Jesus, I'm not particularly comfortable with it. I think it's also weird because it's basically in a kind of confession situation. So people go in, and the article talks about how you've got Christians, you've got Muslims, you've got Jews, you've got atheists, all kinds of different people going in and using it. So I guess people are using it for fun, but the idea that, you know, you go to confession, you confess to a priest, or you confess in your own time, perhaps, to Jesus, but you don't go into a physical space to confess to Jesus. It's a gimmick.

Speaker 3:

They've clearly chosen the title to be inflammatory.

Speaker 2:

Well, you haven't... yeah, people haven't heard the title yet, have they? So do you want to read it?

Speaker 3:

Is it not called AI Jesus?

Speaker 2:

No, the installation is called Deus in Machina. You mean juice in machina? I mean Deus in Machina.

Speaker 3:

Yeah, Deus in Machina. Yeah. What's it doing?

Speaker 2:

Well, it's provoking thought about the role of AI in religious contexts, Jimmy. Although it is situated in a confessional booth, as we said, it is important to note that the AI does not hear confessions, which I don't understand, because it's literally listening to people's confessions, or replace a priest, which is literally what it is doing, but rather it facilitates conversations for spiritual reflection. Well, that I can get behind, because, as I said on the top 10 uses episode, I had a very kind of spiritual conversation with an AI, which was really useful. But that's why I think this is a bit odd, and it's obviously gimmicky. So it's been trained on the New Testament and various theological texts, so it can provide responses that are aligned with church teachings.

Speaker 3:

Definitely should have gone Old Testament all the way.

Speaker 2:

Well, I'm surprised it wasn't trained on all of it. I mean, if you think about the kind of context windows, I'm sure you could train one on the Old and New Testaments, probably.

Speaker 3:

I think, if it's using ChatGPT in the background, the Old Testament wouldn't fit with the guardrails. It's pretty brutal.

Speaker 2:

Yes, it is. Yeah, like I say, I think the thing itself is interesting. It's obviously a gimmick, I mean, it's obviously not there to replace confession, but I think the reason it's a gimmick is because they're using Jesus. Like I say, from a purely theological, Christian point of view, I think it's slightly troubling. From the point of view of having an AI that people can go to and be spiritual and have those conversations with, yeah, I think it's probably a good thing. If it would encourage more people to find spirituality, to find some kind of peace, to find God, I wouldn't have a problem with it.

Speaker 2:

I think the idea of them using Jesus, rather than it just being a kind of AI confession booth for a spiritual conversation, that's the bit where I think not only is it kind of gimmicky, but it would be challenging, I think, for a lot of people of faith. And if we had an AI version of other religious figures, I'm not going to mention which, but I think there are particular religions that would have a real problem with that kind of impersonation, and it absolutely would not be okay to do it. So it's interesting, but, like anything that is going to interfere with religion, I think it's very controversial. AI and religion is something we've talked about very briefly, but I think it's something to watch, because I think where those two intersect will be incredibly controversial.

Speaker 3:

Has this been done in collaboration with the Catholic Church, though? I get the impression it has. No, I don't think it has. Oh, you don't think? From what I read, it hasn't.

Speaker 2:

And I can't see it. I mean, the Catholic Church, like any church, has many levels. It obviously had worked with the particular chapel in Lucerne, because, yeah, that's what I meant, they'd agreed for it to be there. But I can't imagine the Catholic Church as an institution allowing it.

Speaker 3:

Yeah, I'm not saying the Pope signed it off or something.

Speaker 2:

Well, yeah, an AI Pope obviously wouldn't be okay, and an AI Jesus would absolutely not be okay with any churches, I think.

Speaker 3:

Not in those terms. I mean, what I was going to say is, if it is in any way in collaboration with the church, even if it's not the central body, then it's also really progressive and pretty cool. You know, I still think a significant number of people don't really know what AI is all about. That's why we have our podcast. And this is pretty cool. I bet a bunch of people will have seen this and interacted with it and been quite awed by it.

Speaker 2:

Yeah, again, I'm going to refer back to the top 10 news episode, where I talked about, it's not the same thing, but having a kind of spiritual conversation and talking through spiritual and religious things with an AI. It was able to listen, and actually, like I said, it quoted back to me, oh yeah, this sounds like something from a particular Bible passage. I think that's pretty cool. Where I think this kind of crosses a line, like I said, is the idea of an AI Jesus. So if you have a look at it, there's actually a kind of hologram of Jesus, and, like I say, you don't go and do confession with Jesus in a Catholic church. Now, you can have your own moment where you talk to Jesus wherever you want: in a church, at home, in whatever environment you want to.

Speaker 2:

But this idea of the confession thing is, if it was an AI priest, I think it would be not only not controversial, but it would also definitely be a really good thing, because it does encourage more people to be spiritual, and it's able to give feedback. But it's that idea of using Jesus rather than using a priest. Like I said, you can only imagine if certain other religions were to do the same; obviously that couldn't happen. I find it a bit odd that even this chapel has allowed this with Jesus. But, yeah, that's just a personal opinion.

Speaker 2:

Yeah, totally. Right, well, you often make fun of me, Jimmy, when I do my kind of governance and alignment stuff, so I'm going to let you take this one. I'm going to lie down, have a nap, and if you can wake me up when you're finished, I'll join back with the podcast.

Speaker 3:

Yeah, so very exciting stuff: we've got the economics of AI. So what I was going to talk about, actually it's not this month, but a couple of months ago, OpenAI raised, I think it was, another 6.6 billion dollars from Microsoft and others, and they've now got a valuation that's absolutely bonkers. It's like 147... 157 billion.

Speaker 3:

Sorry. And just to put it in context, compared with some of the fastest-growing companies in recent history, like Google and Facebook, OpenAI is now actually outstripping the growth that they had when they first set up, and they were already record-breaking at the time in terms of their valuations. And again, bear in mind, OpenAI is not a publicly listed company, which unfortunately means you can't benefit from their, I think, 1,700% increase between January 2023 and August 2023, and it's increased again since then, because I think their valuation has gone up another four or five times. So, yeah, the exponential valuation of OpenAI is absolutely crazy. As I say, now worth, in theory, 157 billion, based on a company that is 49% owned by Microsoft, but the rest of it is actually owned and governed by a still not-for-profit board, basically under the wise helmsmanship of Sam Altman.

Speaker 3:

Friend of the show. Yeah, friend of the show. But yeah, so this company, they're valued at this crazy valuation, and they reckon they make about 10 billion at the moment in revenue every year, through their 20-dollars-a-month subscriptions, and lose, what, 300 trillion.

Speaker 2:

Oh, it's crazy. Yeah, I mean, just in case people haven't put it together: they're obviously generating revenue, but the amount of money they're spending is, I don't know how many times what they're generating at the moment.

Speaker 3:

Well, absolutely, hence needing to raise this amount of capital. They're worth 157 billion on paper, but they're making about 10 billion in revenue a year. So, you know, if they did nothing else, it would take over 15 years of that revenue just to add up to the valuation. I don't know how much money they're spending, but it's a crazy, crazy amount of money: training models, developing new models, all that kind of stuff. Now, that being said, the reason why all this money's going into AI is because it's such a potential game changer. It's seen as something that can actually revolutionise almost everything, right? So revolutionise search, which Google is still dominant in, and also revolutionise businesses, in terms of being able to automate tasks, write emails, automate a whole bunch of stuff that people do right now. One of the cool things about this is that, in the last couple of weeks, actually... So Anthropic is another company that we talk about quite often on the show, and actually they've got a surprising amount of cash behind them.

Speaker 3:

So they've been backed by Amazon. So clearly you've got Microsoft and OpenAI on one side, and you've got Amazon. So Claude, sorry, Anthropic, really are the good guys, right? Yeah, the little guys, the little train that could, with little Amazon supporting them in the background. Yeah, exactly.

Speaker 3:

So actually, you know, we talk about Claude a lot, and it is brilliant, Anthropic, and it is fantastic, it feels much more natural to use. But actually, they're not really a small fish. I think Amazon's doubled down and brought their investment up to eight billion from four billion, and I think Anthropic are now worth a 40 billion valuation.

Speaker 2:

It sounds like nothing, though, doesn't it? With the numbers that are bandied around now, you're like, a 40 billion valuation, is that it? And then you think about what 40 billion is. I just remember, during the early days of the pandemic, talking to somebody who worked for British Airways, and them saying, oh, it's okay, because they've got a billion in cash, so it'll be okay. Obviously they haven't got this money in cash, but, you know, a billion in cash being like, everything will be fine because we've got a billion. We're talking in multiples of billions now.

Speaker 3:

Yeah, and I can't remember, I mean, wasn't Twitter's valuation when Musk bought it, I think it was like 50-something billion, right? Sounds about right, yeah, something like that.

Speaker 3:

Exactly. But, yeah, so we're talking about Anthropic here, which is a company that not a lot of people have heard of. I mean, if you listen to our podcast, you'll have heard of them, but outside of that, in terms of AI companies that are widely known, I doubt many people have heard of Anthropic, whereas obviously everybody's heard of Twitter, and they nearly have the same kind of valuation as when Musk bought Twitter. So it just gives you an idea of how much investment is going into these AI companies.

Speaker 2:

And clearly, I mean, Anthropic are a great company and they're making great products, and, we should say, genuinely, it's not perfect, but they are doing more on safety than any of the other major labs. So they're putting their money where their mouth is, in terms of what they say they're trying to be. It's not perfect, but unlike OpenAI, for example, they're trying to do this in a reasonably ethical way, I think.

Speaker 3:

Yeah, definitely agree on that. Anthropic are big on ethics, and that's partly because the people who formed Anthropic broke away from OpenAI for pretty much exactly that reason. So, yeah, it's just super interesting. Obviously, you've got Google Gemini; you've got Meta, who are taking a completely different approach and producing models which they're just completely open-sourcing. And actually, in terms of the approach, it's kind of a really interesting one, because we've talked about open source and closed source on the podcast before. But I think, because the capabilities of these large language models are plateauing, actually, where's all that money going that OpenAI are chucking at things? I really wonder whether they're just chucking cash down the toilet, if they don't get those killer business applications.

Speaker 2:

Yeah, I mean, let's be honest, I think the money that's being thrown at it is on the basis that they're going to get to artificial general intelligence, because the commercial uses don't need that. That's the thing: if you get there first, right, then you've got there first, and you can never take that away. I think with the business uses you're talking about, most of the things you're talking about, it won't really matter which model is in the background, right?

Speaker 2:

No. Whether it's a Llama model, whether it's Anthropic, whether it's a Mistral, or one of the Chinese models, Qwen from Alibaba, I don't think it will matter that much, because it's about creating the application.

Speaker 2:

Of course you need a frontier model, of course you need an advanced model, but you don't necessarily need the most advanced one. I read a lot of the stuff that Gary Marcus writes, and Gary Marcus is very much of the view that it's plateauing, and his kind of fear, and actually we talked about this in the Daron Acemoglu interview, is about the amount of investment, and what happens if it doesn't pay off. The problem is so much money has been invested in here. You know, the idea is this is the only way the world economy is going to be saved. Maybe it doesn't need saving, maybe it needs to crash and we can rebuild a slightly different world. But if it is going to be saved and we're going to go back to growth, it's like all the chips are in on this.

Speaker 3:

All the chips are in on it.

Speaker 2:

Yeah, literally. It doesn't need to be like a wild poker game, exactly. And it needs to pay off, and, I don't know, I feel like it's a hell of a lot of money to be putting in on one company. It's not the idea, it's the fact that it's going to be OpenAI, and the gap. You know, everyone talks about things as, it's a GPT-4 level model, it's a GPT-5 level model. What if someone leaps out ahead and gets out ahead of ChatGPT? And then what? Because once they lose that and they're no longer the kind of market leader, they're just like all the others.

Speaker 3:

It's interesting, isn't it? I think part of this is marketing and being first to market. It's a bit like, I don't know, Bitcoin or Google or things like that. I think ChatGPT has become synonymous with AI in the same way as Google became with search, so people literally started saying google it instead of search for it, and I think that's the same thing with OpenAI and ChatGPT. Whenever you ask anyone about AI who doesn't really know much about it, ChatGPT, or "chat GPTP" as a lot of people call it, usually comes out first, and people don't know about Gemini, they don't know about all these other AI products, they don't know that it's integrated into search now. And so that's the kind of advantage they've had as first mover, and I feel like a lot of this valuation stuff is based on that.

Speaker 2:

But they don't... Like, you talk about this a lot, particularly in the last few months, about how it almost doesn't matter, because if you take ChatGPT, people have got it on their phone, right? So everyone uses ChatGPT, the interface, and a lot of people pay 20 dollars a month. Well, I pay for Claude. I'm not sure if you still pay for Claude, but, you know, regardless, we all pay for that kind of thing. They're not making money out of this. That's not how they're making money. This is like pocket money, right? It might pay for the coffee.

Speaker 2:

Yeah, it literally keeps the lights on, but it's not where the money's going to be generated from. So if we say that, where is ChatGPT or OpenAI's advantage, in terms of that kind of commercial, making-money thing? Because, and we've talked about this quite a bit, Meta and Google, and actually Microsoft, which maybe is the way, but Microsoft are also doing their own thing, they've got the infrastructure, and so it's much, much easier for them. Like, where do OpenAI make their money? I still don't really understand where they're going to make their money. And I'm not saying they won't, but I don't see an easy pathway to it, in a way that, if Google cracks it and gets ahead, well, it's simple: then they've got everything, it's all integrated. Same with Meta.

Speaker 3:

Meta are going a different path, because they're saying, we don't need to make money out of the AI, because the AI will help us make money out of everything else. So theirs, again, is in many ways a simpler model. Anthropic and OpenAI need to make it through their large language models; that's more difficult. Yeah, personally, I think it's business applications. So OpenAI obviously have a link with Microsoft. I'm not 100% sure, but I'm pretty sure that Copilot uses OpenAI, and so then all of the revenue from Copilot, which is a business application, like, that is going to fly, and I think that's based off OpenAI's models.

Speaker 2:

Did you know that if OpenAI get to artificial general intelligence, then the whole thing's off? Basically, Microsoft don't get any revenue.

Speaker 3:

From the AGI stuff, specifically. I find that crazy. I don't know who signed up to that agreement, but it sounds crazy. Maybe Microsoft didn't actually believe they'd get to AGI, which is maybe what's behind all this. But that's crazy, that's nuts, because that's the golden goose, right?

Speaker 2:

So, Jimmy, you said you wanted to talk about the Musketeer.

Speaker 3:

Yeah, here he is again. He's always in the news, isn't he? So Musk, I think, had a lawsuit against OpenAI. He withdrew it in July, only to revive it later in the summer, so I think they've probably had a bit of a rework of the whole thing. Mostly, they've named a whole bunch of new defendants, including Microsoft, Hoffman and Templeton, and a couple of new plaintiffs: a Neuralink exec, who I think annoyed Musk by leaving Neuralink, by the sounds of it, and an ex-OpenAI board member.

Speaker 2:

I like him more and more, honestly. I've got to be honest, I didn't think I'd say this, but I do. It's not just because he's taken Sam Altman to court, but I find that I quite like Elon. Maybe we'll just leave it at that.

Speaker 3:

I'm the other way around. I used to, but I really don't now, the more he gets into politics and getting his opinion out there.

Speaker 2:

Well, you're not a right-wing nutjob conspiracy theorist like me, are you? Not as far as I'm aware. So that's probably why.

Speaker 3:

But then you must be quite self-aware to be saying that, especially on a podcast that's going out to millions. So, yeah, Musk. Just to give a bit of background on this: basically, Musk was one of the original founders of OpenAI, and when I say founders, I mean he put a load of cash into it. In fact, he's argued in previous complaints that he was defrauded out of more than 44 million dollars that he donated to OpenAI, by them preying on his well-known concerns about the existential harms of AI.

Speaker 2:

Elon needs that 44 million as well. He can't eat.

Speaker 3:

No, he's probably struggling, yeah.

Speaker 2:

He's earned that money, and this is not a joke. He's earned that money in the time we've talked about this.

Speaker 3:

Oh, yeah, probably yeah, in the last 30 seconds, I think so. So yeah, just as a bit of background, because people probably don't know about this, but OpenAI was launched in 2015. It was a non-profit organisation. In 2019, it was converted to capped profit, but Musk was um. In 2019, it was converted to capped profit, but musk was one of the like like I say, one of the driving forces behind it got together with sarn altman, like, agreed with all their principles and originally the principle behind the company was that it would be driven by a not-for-profit board that was purely about research and wanted to develop safe superintelligence or safe AGI or safe intelligence Ilya Sutskovich's company.

Speaker 3:

Yeah, Safe Superintelligence.

Speaker 2:

Oh, sorry. But, yeah, well, maybe that's why he started it, because he's trying to fulfil the original mission.

Speaker 3:

Yeah, exactly. So there's a few people that have broken away from OpenAI for exactly this reason.

Speaker 2:

I think, literally, apart from Sam Altman, all of the originals have broken away. Yeah, Musk, Sutskever, I can't remember who else... who is it, the... there's a Scandinavian, anyway.

Speaker 3:

Yeah, it's a shame you can't remember the name, so you can butcher it.

Speaker 2:

It's unusual for me, isn't it, to either not remember a name or completely butcher it. But the Scandinavian...

Speaker 3:

Yeah, definitely, I know who you're talking about. Jan something. Yeah, sorry, Jan. So, yeah, he's back at it again. I don't know whether he'll win this court case or not, but effectively he's saying that they defrauded him, because this was originally supposed to be not-for-profit. It's deviated from its mission statement, it's deviated from its not-for-profit status, and, I'll be honest, I kind of agree with Musk on this. I don't know how it'll work out in the courts.

Speaker 3:

You see, you agree with him. Yeah, I do agree with him, because, at the end of the day, it was founded on a set of principles, and it sounds like he literally donated them cash based on the principles they were talking about, and the company's massively deviated from that since. So we'll see how it goes. Hopefully they'll televise it and we can watch it. It'll be like the Depp trial for nerds.

Speaker 2:

Or Frost versus Nixon. Thrilling. Frost versus Nixon was brilliant, and if you haven't watched it, watch the film, everyone. Have you watched the film? Is this the documentary? No, it's a film. Oh, it's a film that pretends to be... This sounds like the worst film ever. It's a film that is just the interview, and then the preparing for the interview, but it's absolutely brilliant. Michael Sheen.

Speaker 3:

I don't think I have seen it. Yeah, I'll add it to the list. No, watch it tonight, it's amazing. Cool.

Speaker 2:

Before I do my next one, I have to tell you: at the beginning of the episode, we talked about the algorithm and how it was telling Jimmy about all these commercial uses for AI, and I was saying, well, the algorithm knows you. Jimmy's computer just popped up a chair workout for seniors, with a picture of a man who, I can only say, has got a stuck-on Father Christmas beard and white hair, but the most incredible body you've ever seen. It looks like a buff Captain Birdseye.

Speaker 2:

It does, yeah. And for those that have never seen Jimmy, if you go back to last week's episode, we put the video out, so you can actually have a look at him.

Speaker 3:

To be fair, I think this dude's got a better body than me, but I'm not sure about the rest of him.

Speaker 2:

Well, this is actually the perfect segue, and this was not intended, no, it genuinely is, into my AI medical updates. So, in recognition of my attempts to be more positive about things, and to face fear with hope instead of fear with fear, I've decided, as someone who believes that Big Pharma and the medical industry have got all of our worst interests at heart, that I'm going to give some positives around medicine.

Speaker 2:

So I've got a few updates, and the first one is actually about grey hair. A study by researchers at New York University's Langone Health, which I've probably pronounced wrong, found that melanocyte stem cells, McSCs, in mice lose their ability to migrate and mature as hair ages. This immobility prevents the cells from producing pigment, which leads to grey hair, and the researchers believe that if these stuck cells can be reactivated, it may be possible to reverse the greying process in humans as well. So I should just say: they've used AI to basically identify the reason why hair turns grey, and they apparently think they can use this to potentially reverse the greying process. Unfortunately, they are not able to do anything against balding, so for most people it's pretty limited.

Speaker 3:

Hopefully that's next. I think it's pronounced Milano site as well, I'm not sure.

Speaker 2:

The way you said it sounds like you know, so I'll go with what you said. I'm just making stuff up now. Researchers at the Harvard University... yeah, Harvard. Yeah, Harvard.

Speaker 3:

Yeah, that sounds about right. Harvard, oh yeah, maybe that's it, yeah.

Speaker 2:

Maybe. Harvard could sound right. Harvard, Harvard. I haven't heard of it. But Harvard University have developed an AI model called TxGNN, which identifies existing drugs that can be repurposed for over 17,000 rare diseases. I don't have a list of the diseases, otherwise it would be a long episode, but the model aims to expedite the discovery of therapies for conditions that currently lack effective treatments, potentially addressing significant health issues. So they've done fuck all to date, but they've identified something that might help repurpose existing drugs. I would say there's a lot of existing drugs that have been repurposed for COVID, but unfortunately certain companies have not allowed that to be pursued much further. So let's see if this is another one that gets held up by Big Pharma. I said I wouldn't talk about Big Pharma, didn't I?

Speaker 3:

But you've slipped there, you've managed to get it in.

Speaker 2:

Just because I always want to. Okay, another one. A study in the UK has found that AI can accurately predict which patients require ambulance transport to hospital, achieving correct predictions 80% of the time. It doesn't sound that great to me.

Speaker 2:

80%, I would say, means 20% where they've got it wrong. But they can potentially use this to optimise emergency services and improve patient outcomes. For those that don't know, for various reasons, the pandemic, economic pressures, ageing populations, etc., ambulance services are under a lot of pressure, and not just in the UK; I know it's the case in Australia and in the US, but I'm obviously more aware of the UK. This would potentially mean that they could better identify which patients actually require ambulance transport. We've seen some heartbreaking stories in the last few years of people having a cardiac arrest and waiting 50 minutes for an ambulance, or people waiting even hours for ambulances. So this could potentially have a real, genuine benefit. And then this is one I like.

Speaker 2:

The only problem with this one is I tried to look at things that happened in the last month, because, you know, this is a monthly roundup. This was from an article in the last month, but I've got a feeling it might have actually happened a few months previous. Using AI, researchers have found a new type of antibiotic that works against a severe drug-resistant bacteria. I should say here, this is not a, what's the word, broad-spectrum antibiotic; this is something very specific. I'm going to try and read this, and I'm sure I'm going to butcher it.

Speaker 2:

Acinetobacter baumannii is a nosocomial, gram-negative pathogen that often displays multi-drug resistance. However, this new antibiotic could, so actually it doesn't even say it necessarily can, could prove able to kill this drug-resistant bacteria. So actually, those medical updates, with the exception of one of them, they're all kind of bollocks, to be honest, aren't they? They're all things that might have some use, and maybe one day in the future they might be able to be used to treat some disease, but actually they're pretty rubbish at the moment.

Speaker 3:

Yeah, it feels like Anchorman or something.

Speaker 2:

Well, it feels like I'm Anchorman, right? Yeah, like you're just reading this for the first time.

Speaker 3:

I'm Ron Burgundy.

Speaker 2:

I am reading it for the first time. We were saying on the last one, we don't rehearse these episodes, which will be a shock to a lot of our listeners.

Speaker 3:

No, perhaps we should.

Speaker 2:

So I've got a study here about ChatGPT citations. The gist of it is, they're a load of rubbish.

Speaker 3:

Another exciting one from you, then. Yeah, I know, I've gone for all the best topics today. But, yeah, so the Tow Center for Digital Journalism took a look at AI chatbots and how they produce citations, or sources, for publishers' content, and it "makes for concerning reading" is what the article says. So, for those of you that don't know, OpenAI recently added search to ChatGPT, I think it's called ChatGPT search, which is supposed to compete with the likes of Google and stuff like that. Now, one of the common problems with large language models is that they can hallucinate: if they don't know the answer to something, they'll sort of make something up, or, you know, maybe not entirely make something up, but get something slightly wrong. So, yeah, this Center for Digital Journalism...

Speaker 3:

They basically took a bunch of quotes from stories from different publishers. And with this as well, different publishers have signed up to agreements with OpenAI, like licensing deals and things like that, so it is a bit complicated. But you have got a couple of examples here: the New York Times, which is currently suing OpenAI in a copyright claim; the Washington Post, which is unaffiliated with OpenAI; and the Financial Times, which actually does have a licensing deal, so it's all a bit complicated. They searched for stories that had shown up in these publications, and they chose quotes that, if pasted into Google or Bing, would return the source article among the top three results, and evaluated whether OpenAI's new search tool, so this is the latest OpenAI search, would correctly identify the article that was the source of the quote. Okay. What they found was not good for news publishers.

Speaker 3:

Though OpenAI emphasises its ability to provide users timely answers with links to relevant web sources, the company makes no explicit commitment to ensuring the accuracy of those citations. Brilliant. It's a notable omission for publishers, who expect their content to be referenced and represented faithfully. The tests found that no publisher, regardless of degree of affiliation with OpenAI, was spared from inaccurate representations of its content within ChatGPT. So basically, there was a whole bunch of responses from ChatGPT where it sometimes gave correct citations, but also citations that were completely wrong, and other stuff that fell in between. They say that, ultimately, their conclusion is that ChatGPT's citations appear to be an unreliable mixed bag.

Speaker 3:

They also found very few instances where the chatbot didn't project total confidence in its wrong answers. This is something else we've seen from these large language models like ChatGPT: they will very confidently answer questions, and in the case of this research they've demonstrated that it's basically making stuff up while presenting it as completely true. They found that the bot rarely fessed up to being unable to produce an answer, and that it was really assertive in what it was saying, whether it was incorrect or correct. It wouldn't say things like "it appears" or "it's possible" or "it might", and it wouldn't say that it couldn't find the exact article. The article goes on to say that, in total, ChatGPT returned partially or entirely incorrect responses on 153 occasions, and out of those 153 occasions it only acknowledged an inability to accurately respond to a query seven times.

Speaker 3:

Which is obviously not very much — seven out of 153 is under 5%. Only in those seven outputs did it say things like "it appears" or "it's possible" or "it might", or that it couldn't locate the exact article. So effectively, what it did was just hallucinate. And, you know, without looking into all the detail behind the responses it gave...

Speaker 3:

It might have been a mixture of correct responses and a bit of a jumble, but at the end of the day it's supposed to be a search tool, so it's not supposed to just make stuff up the way a plain large language model does.

Speaker 3:

And not that they always make stuff up — they're quite accurate with a lot of what they say. But this is a case where you're actually trying to research something and you want sources. A traditional search engine would almost always give you links to whatever you're searching for — in this test, the source article was in the top three results every time — and then you'd be able to go and look into it for yourself. What they're saying is that with ChatGPT, whether its output is right or wrong, it's not citing things correctly. It won't admit that it can't find the information — around 95% of the time, by the sounds of it — and then it doesn't actually give you the correct links either.

Speaker 2:

So as a search tool, it sounds like ChatGPT has a little way to go. I was just searching while you were doing that, because all this stuff reminds me of Gary Marcus — I keep talking about him, but at the moment he's bang on with a lot of what he's said. He was talking about ChatGPT and saying it has no real relationship with the truth. This was based on, I think, GPT-3.5 and 4, so it was a little while ago, but this is the bit I thought was really good: everything it produces sounds plausible, but it doesn't always know the connections between the things it's putting together. You can't say at the beginning of a ChatGPT session "please only say true statements in what follows" and expect it to work. It just won't be able to respect it. It doesn't understand what you mean by "only true things" and it can't constrain itself to only say true things. And I think that's the point: it's not able to.

Speaker 2:

The reasoning model is better. But one really interesting thing — to go back to Claude again — is that sometimes now, in the latest update of Claude, when you ask it something it says, "If I try and answer this, I think I might hallucinate." Which is really cool, that it gives you that warning. It's obviously not self-awareness; it's just something they've built in that gets it to tell you, but it's telling you that it might hallucinate. Now, okay, ChatGPT's reasoning model, o1, is more advanced than Claude in the sense that it has those reasoning steps, but it stands to reason that it still hallucinates, because ultimately, if it can't find an answer, it's set up in such a way that it tries to answer everything.

Speaker 3:

In a way, one of the best things about this is that it's such a human-like quality: bullshit. Didn't we do a song, "ChatGPT is Bullshit", on one of the episodes? I think we were ahead of the curve on this.

Speaker 2:

I think we did. "Hard versus soft bullshit" — that's exactly what the song was.

Speaker 3:

Yeah, this is something that humans do, right? We think we know the answer to something, or we have an opinion on something, and that colours our answer, so we'll confidently give an answer anyway. But I think it's more dangerous with these language models, because everyone's chatting to them all the time, people are relying on them, and I guess there's a bit of a human instinct that says, "well, the computer said it, so it must be true", which is dangerous in the first instance. We've talked about critical thinking before, but yeah: continue to apply your critical thinking to anything that ChatGPT, or any other LLM, says.

Speaker 2:

So, the last item on this month's update. There's a deepfake-detection browser that's been designed — and one of the cool things is it's been designed in the UK, because we obviously don't have a domestic large language model in the UK and don't really have much in terms of development, although DeepMind was originally a UK company. But this browser sounds pretty cool. It's from a UK startup called Surf Security, and their aim is to combat the growing threat of deepfakes, which are increasingly being used for fraud and misinformation. The tool, touted as the world's first deepfake-detecting browser, is designed to identify AI-generated audio and video with remarkable accuracy — currently up to 98%. I mean, that sounds good, although I'd like to think I'd get close to 98% myself.

Speaker 2:

But pretty good, and it's still a work in progress. Strictly, what it claims to identify with that 98% accuracy is AI-generated audio and video — it doesn't say "deepfakes" as such — so I think that figure is across the board; with some content it would be identifying that it's not a deepfake. Either way, it allows users to discern whether they're interacting with a real person or an imitation. I should say the full version hasn't been released yet, so I think what's out at the moment is a beta. But these are the key features they talk about.

Speaker 2:

First, neural network technology: the browser employs advanced, "military-grade" neural network technology. So I didn't know they were using neural networks in the military — it's quite shocking, what a surprise. Specifically, it utilises what they call state space models, and I've no idea what that means.

Speaker 2:

But the browser's approach is that it analyses audio frames for inconsistencies, effectively detecting deepfake content across various languages and different accents. Then, speed and efficiency: the detection process is incredibly swift, with the system capable of identifying deepfake audio in under two seconds, which is obviously critical in scenarios where you need correct, timely information. It can be used across various online platforms, including communication software like WhatsApp, Slack, Zoom and Google Meet — you simply press a button to verify the authenticity of audio, whether recorded or live. It doesn't say how it does it, and on the audio side I'm not as convinced as on video. And then future enhancements: they're looking to integrate AI image detection into the browser toolkit, further enhancing its utility in identifying manipulated media. So it sounds pretty cool. Yeah, it sounds awesome.
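To make the "analyses audio frames for inconsistencies" idea concrete, here's a toy sketch of frame-level scoring: slice the waveform into short overlapping frames, score each frame, flag the clip if enough frames look synthetic. The frame_scores classifier, the frame sizes, and the thresholds are all hypothetical stand-ins — Surf Security's actual pipeline isn't public.

```python
# Hypothetical frame-based deepfake-audio check, not the real product.

import numpy as np

FRAME_SIZE = 400       # samples per frame (25 ms at 16 kHz)
HOP = 160              # step between frames (10 ms at 16 kHz)
FLAG_THRESHOLD = 0.5   # fraction of suspicious frames that flags a clip

def frame_scores(frames: np.ndarray) -> np.ndarray:
    """Stand-in for a trained per-frame classifier returning the
    probability each frame is AI-generated. Here: random scores."""
    return np.random.rand(len(frames))

def is_deepfake(audio: np.ndarray) -> bool:
    # Slice the waveform into overlapping frames.
    n = 1 + max(0, (len(audio) - FRAME_SIZE) // HOP)
    frames = np.stack(
        [audio[i * HOP : i * HOP + FRAME_SIZE] for i in range(n)]
    )
    scores = frame_scores(frames)
    # Flag the clip if a majority of frames look synthetic.
    return (scores > 0.5).mean() > FLAG_THRESHOLD

# Usage: one second of dummy 16 kHz audio.
clip = np.random.randn(16000).astype(np.float32)
print("deepfake?", is_deepfake(clip))
```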

Speaker 3:

It sounds like something that we're in desperate need of as well. But my first question is: who controls it?

Speaker 2:

Because it's the same argument as with disinformation and misinformation: who decides? You know, is it run by left-wing media or is it run by the far right? That could be a problem, but let's work on the basis that this startup has all of our best interests at heart. I think this is a really, really positive thing.

Speaker 3:

I look forward to seeing this. Yeah, no, it does. I mean, the 98% thing you jokingly said — that you could detect 98% yourself. (I was joking — I can't.)

Speaker 3:

Yeah, I think the thing with that is — what's the word — a controlled environment. There are these AI image sites where you compare AI images and real images and have to guess which is which, and in that situation I can quite often guess most of them. I know a few tricks for doing that, around contrast and a few other things. But even then, quite a lot of people find it difficult, even when you know what you're looking for. And I think the challenge with deepfakes is that half the time it's just going to be a Facebook pop-up or a YouTube ad or whatever — you're not expecting it. And I guess in those cases, do you really...

Speaker 2:

...really care? When I'm thinking about deepfakes, I'm thinking about things which are dangerous and going to manipulate or mislead you. That marketing stuff — it doesn't really matter if it's a deepfake. Okay, maybe I'm more likely to buy something because it's got someone from Blackpink in the video — it shows how cool I am — or, you know, other really cool people like Jeremy Beadle, or whoever else you kids are into these days.

Speaker 3:

Who's that a reference for?

Speaker 2:

So I know what kids are into these days — Anneka Rice. My daughter's a big Jeremy Beadle fan. Justin Bieber, maybe? I'm thinking of Justin Bieber. No, I wasn't, I was thinking of Jeremy Beadle. I think Justin Bieber's even out of date...

Speaker 3:

...as a reference now.

Speaker 2:

Well, Justin Bieber now — oh, is he back in? No, he's back, but he's got Bell's palsy, caused by long COVID.

Speaker 3:

So he has.

Speaker 2:

Yeah, but he's been singing some Christian songs recently, so maybe he's getting better. So shout out to...

Speaker 3:

JB — and Jeremy Beadle, the real JB. On this topic, I just found a random Reddit post — well, actually, the gist of the comments on it was that CAPTCHAs are soon going to have to just become memes, because AI can't figure out memes, and CAPTCHAs are getting pretty ridiculous. I had one the other day that said "click on the item that can be folded", and there was an image of two pandas, two lions and a t-shirt. Did you get it right?

Speaker 2:

I got it right, but it took me five attempts, because there are two pandas and two lions. I can fold a fucking panda.

Speaker 3:

You can fold a panda. I quite look forward to the day when CAPTCHAs just become memes. Maybe we should invent that — CAPTCHAs as memes, meme CAPTCHAs.

Speaker 2:

The problem is, I'm not sure AI can do nonsense, because that's something we can still do pretty well — us in particular. Absolute gibberish, just verbal diarrhoea. If the AI can't engage in a verbal-diarrhoea conversation, then it's an AI. Yeah — coming next year: Gibberish GPT. I'll just finish off with something that's not quite deepfakes, but it's very similar.

Speaker 2:

Something I saw the other day that I thought was really cool: there's an AI granny that basically answers phone calls — not prank calls, but AI phishing calls, like the marketing calls — and this AI granny can then engage for like an hour and keep the AI on the phone having a conversation with it. I don't know if it can go more than an hour, or if it cuts out after an hour.

Speaker 3:

So it's not keeping a human prankster on the phone — it's keeping an AI on the phone?

Speaker 2:

No, no — it's based on the fact that there are all these AI calls now being sent out, where an AI calls and just tries to sell you something. Obviously, if it were a real person on the other end, then once you start talking nonsense they'd just put the phone down. But the granny is somehow able to keep engaging with the AI's algorithm, so the AI that's calling you — it's not a prank call, they're just trying to sell you something — gets tied up.

Speaker 2:

It just kind of ties it up for like an hour. The image I saw was of an old lady with glasses — the persona it's given is an AI granny — and it's designed to combat these AI phishing calls. So I thought that was a pretty good idea. It's obviously not deepfakes, but it's a way of countering and battling them. So we've had two examples here of tools coming out to try and counter some of the negative, imminent uses of AI, which is pretty good.
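For flavour, here's a toy sketch of the kind of time-wasting loop an "AI granny" might run: keep producing slow, rambling replies until the caller gives up or a time budget runs out. The granny_reply generator and the canned lines are hypothetical — this is in no way the real product's implementation.

```python
# Toy "AI granny" call-wasting loop, purely illustrative.

import time

MAX_CALL_SECONDS = 60 * 60  # give up after an hour

CANNED_RAMBLES = [
    "Ooh, hold on dear, the kettle's boiling.",
    "Sorry love, could you say that again? My hearing aid...",
    "That reminds me of my grandson Keith. Do you know Keith?",
]

def granny_reply(caller_line: str, turn: int) -> str:
    """Stand-in for an LLM call that produces a meandering reply
    loosely related to what the caller just said."""
    return CANNED_RAMBLES[turn % len(CANNED_RAMBLES)]

def handle_call(next_caller_line) -> int:
    """Run the stalling loop; next_caller_line() returns the
    caller's latest utterance, or None once they hang up.
    Returns the number of turns the granny survived."""
    start, turn = time.monotonic(), 0
    while time.monotonic() - start < MAX_CALL_SECONDS:
        line = next_caller_line()
        if line is None:  # caller hung up
            break
        print("granny:", granny_reply(line, turn))
        turn += 1
    return turn
```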

Speaker 3:

I am definitely going to try AI Granny out. I don't answer any of my phone calls anyway because I live in China.

Speaker 2:

You're going to try AI Granny out.

Speaker 3:

But I'll try AI Granny.

Speaker 2:

I'd like an AI Granny.

Speaker 3:

I'm going to try it out on you next time you phone me.

Speaker 2:

I've never called you. That's true. Well, I think that's probably a good point to end this episode on. So, yeah, thanks for listening.

Speaker 2:

I like your socks, by the way. So, we should be back next week with another China-themed episode. It might be part two of the interview we did with Chrissy Loke, it might be another China one that we're recording, or it might be one on military uses of AI — which is absolutely nothing to do with China. But keep listening, subscribe, send us some feedback, because we really could do with that, and listen to Jimmy's song. Bye. Archie Knox, "Rhythm of Machines".

Speaker 1:

[Song: "Rhythm of Machines"] Rhythm of machines. Rhythm of machines. We are the machine. We are the concrete. We are the real. We are the team.
