Experts in the Loop
Experts in the Loop brings you inside Australia’s AI frontier. Hosts Chris Sinclair and Mark Monfort sit down with founders, leaders and experts shaping the digital AI market, uncovering the products, journeys, and ideas driving AI adoption. Smart, unfiltered, and a little cheeky, it’s your backstage pass to the people redefining Australia’s tech future (and the world, of course).
Mark Monfort, the tech wizard behind the @AusDefi Association and NotCentralised, isn't just a name—he's a legend. With blockchain fin-tech victories under his belt, he's now on a quest to build the ultimate #LLM, SIKE.ai, enhancing business workflows and securing data like a true digital sorcerer. Nothing can stop him!
Chris Sinclair, the design guru and UX/CX mastermind, knows the secrets of digital innovation and business strategy like the back of his hand. Partnered with Digital Village, a league of specialists leading the charge in product development and innovation, Chris is here to prove that the old ways of working are no match for the future!
Get ready for epic discussions, expert perspectives, and a sneak peek into the future of digital innovation. Don't forget to like, subscribe, and stay tuned for more episodes as we explore the frontiers of technology with a dash of humour and a whole lot of superhero flair...or fails!
Ep 32 | ChatGPT, Gemini, Claude and Perplexity DeepResearch Battle
In Episode 32 of Digital Nexus, we go from the bush to the boardroom. Kev the Yowie returns as we unpack the latest in AI trends: Google’s Veo 3, OpenAI’s new business features, and Perplexity’s research mode. We dive into AI adoption in Australia, explore how Perplexity outperforms in real-world research comparison tests, and showcase our entry into the world’s largest AI hackathon. Plus, we reveal how ambient agents and agentic workflows are quietly reshaping knowledge work, and how businesses can future-proof their operations.
🛠 Built with Bolt
🚀 Sponsored by FEX Global
🎯 Stay till the end for a side-by-side showdown of ChatGPT, Gemini, Claude, and Perplexity.
⏱️ Chapter Timestamps
00:00 – Kev the Yowie returns + intro banter + AI work we've been up to
12:33 – SIKE demo update
13:52 – AI adoption in Australia: Productivity Commission report
17:28 – AFR AI Awards + Aussie winners
18:11 – Andrej Karpathy’s GPT model breakdown
21:00 – Mary Meeker’s return with 340-page AI megareport
24:05 – AI World’s Fair & newsletter automation workflows
33:12 – Mistral Code
37:05 – More OpenAI news: memory upgrades for free users
39:20 – ChatGPT connectors
46:20 – Bolt world's largest hackathon and discussion of our joint project
52:00 – Comparison deep dive: Perplexity vs ChatGPT vs Gemini vs Claude
Show links:
Aus Gov releasing AI Adoption Tracker - https://www.industry.gov.au/publications/ai-adoption-tracker
Karpathy explainers on current state of LLMs - https://x.com/karpathy/status/1929597620969951434
AFR AI Summit articles - https://www.afr.com/afrlive/ai-summit
Mary Meeker report - 340 page - https://www.bondcap.com/report/tai/
Mistral Code client - https://techcrunch.com/2025/06/04/mistral-releases-a-vibe-coding-client-mistral-code/
Anthropic AI writing its own blog articles with human oversight - https://techcrunch.com/2025/06/03/anthropics-ai-is-writing-its-own-blog-with-human-oversight/
ChatGPT updates for Plus users - https://www.financialexpress.com/life/technology-openai-upgrades-chatgpt-for-free-and-plus-users-check-out-new-features-you-can-enjoy-3868505/
OpenAI introduces new business features - https://techcrunch.com/2025/06/04/chatgpt-introduces-meeting-recording-and-connectors-for-google-drive-box-and-more/
Other Links
🎙️our podcast links here: https://digitalnexuspodcast.com/
👤Chris on LinkedIn - https://www.linkedin.com/in/pcsinclair/
👤Mark on LinkedIn - https://www.linkedin.com/in/markmonfort/
👤 Mark on Twitter - https://twitter.com/captdefi
SHOWNOTE LINKS
🔗 SIKE - https://sike.ai/
🌐Digital Village - https://digitalvillage.network/
🌐NotCentralised - https://www.notcentralised.com/
YouTube Channel: https://www.youtube.com/@DigitalNexusPodcast
X (twitter): @DigitalNexus
Welcome back, folks. This is another episode, episode 32 of the Kev the Yowie... no, no, the Digital Nexus podcast. But you will see Kev the Yowie here. Some really interesting stuff: Google Veo 3, which we've been playing around with, a bit hit and miss, but you'll see the hits and you'll also see the misses. Then we dive into some really interesting comparisons and big trends hitting the market, particularly in Australia, focusing on AI adoption. We look at OpenAI, Gemini, Claude and Perplexity in particular and how they conduct their research. All that and more in this episode, plus what we're going to be doing for a cool hackathon with over 100,000 people signed up to it. You'll see more of that in the show. See you there. How can we improve your health? Oh, cheers. A bright new light, future forward. I think we would. Kev the Yowie, ready? Yeah, look at that beauty. I could rip a fat cone up here. In all seriousness though, I hope this channel brings you serious joy; if not, off somewhere else. No, no, I'm kidding. Now look what we've come across, guys. We have to be careful of snakes, though. Scary, they are. We've found camp next to this tree, but it's pretty open and it's creepy out here. Well, that's it for the night, guys. Next episode we'll hopefully make it to Tom Price, that's if I don't get eaten out here by something scary. Anyway, welcome back to another episode of Kev the Yowie, because we are in Australia. If you don't know what a yowie is, it's like a drop bear; you'll see it. But episode 32, we're here at the FEX Global market site. Thank you to our sponsors for the amazing space that we've got. We do another podcast here for blockchain; I'll put a link to that in the show notes. Chris, that's crazy. Did you like Kev? He's my new icon, to be honest.
I've got... he's got his own page. I've got his posters up on my wall. It's not the only video he's made; he's made a full movie series now, a cinematic series: Kev Fights Back, Kev the Daily Worker, Kev Owns a Corner Shop. It's pretty cool. And then there's Two Guys, Kev, in a Pizza Place. Then there's this. Now we've got to pray to the demigods, because Google Veo 3 is out and I've got access to it. So if you've got requests for stuff to create, hit me up. But it wasn't always doing sound right. You were having trouble with it, so you had to go back and be like, do not ignore putting sound in the background. Look, I literally said, here, do not make a video without the sound. And it's happening a lot for you as well; you've done it. Yeah, a lot of people are complaining about it, that it's doing funny things. Let's see if this works. No sound. Womp womp womp womp. Even you said don't. Yeah, I know people are using Flow as well, so it could be that people are passing it through Flow to get a lot of their sound effects. But I'll try one more time; we'll come back to it in the background. So that's literally what you do, folks: just do it again. And there are ones where it did work. In any case, that's what's going on. What else has been happening for you this week? It's been a busy week, but one of the things that took up most of my stupid mental space is this pretty little device I brought in just to show off to you: the new Nintendo Switch 2. Unfortunately, how they haven't considered any artificial intelligence integration into this device is actually quite baffling to me.
Even if it was just frame generation, which all the consoles are doing, and other elements, they just haven't done it. But you could start gaslighting Mario into understanding that he's stuck inside of a box. So there's a new Nintendo Switch with a huge, like 7.9-inch screen or something? Wow. I don't know if you can get this on camera, folks, but it is beautiful. It's really nice, a good handheld. I got the Mario Kart bundle. I'm going straight, I'm going straight, and then I crash into some mud. Okay, game over. So yeah, it's pretty cool, pretty awesome. I just love even this little... I love the mud crash. The biggest change is these devices, because this is what you can do to play the games, like nunchucks, to give you this. And we could play together. I would go afterwards. Okay, all right. And then there's just that real satisfaction of... okay, that was satisfying. Well, speaking of satisfaction with hardware devices, the whole Jony Ive thing. I'm excited to see whatever they do end up coming up with, because this is the guy that was all about design; whether he borrowed those design principles and ideas, I don't know. Everything's built on the back of everything else. Everything is a wrapper of everything else, so who cares, really? This is just a continual iteration of the 20 other devices that failed before it. Yeah, exactly. So clearly a big week. Well, that just came out yesterday, so it took a lot of energy and mental space for me not to play it yesterday. Oh, sorry. Okay. I hope she doesn't watch this. In Australia it's a hefty $769 for the bundle. You're not supposed to say the price. It's crazy expensive, but if you do consider inflation, it's actually the same price as what it was 20 years ago. There you go. Perfect.
But it's a lot of money when people think about it. Other than that, man, it's been a packed week. We've had an adventure together. We're going on an adventure. We'll talk about that later in the show: a hackathon with Bolt, which is pretty exciting; some fun stuff happening there. I've been planning around the website for the tool that we're building, the virtual sales tool. It's a website I just quickly built because I had a couple of meetings with people, and I can show you some stuff. So we put that up, prototyped it quickly. Well, actually, I didn't prototype the landing page in Bolt in the end, because I wanted the long-term SEO effects, which I know you struggle with. Originally I built it in Bolt; I was like, oh, I like this style, and then I ported it across. And just for a lesson here, folks: Bolt is great as a tool for the logged-in part, but you still need more, and there are some hacks I've been playing around with, not quite fully tested yet, in terms of SEO hacks to make sure that a Bolt site can be read by SEO scrapers and Google. But the best and safest way right now is to build the public site with Webflow or WordPress or Wix or Squarespace and have a normal site. What Bolt is for is that logged-in experience, because those scrapers won't look at the logged-in part; you need something public they can be pointed to. And it has a lot to do with what Bolt actually builds: Bolt primarily builds great application experiences, web apps, software. It's a full-stack, back-end solution; it loves React and can do JavaScript and Node and all these other things. But because of React and those types of tools, they're not built around SEO, right?
They're application experiences, and therefore SEO is not what they're built around, because that's not what you put out there in the world; or back in the day you didn't, anyway. Maybe they'll have it later on; watch this space. But for the time being, if you want something that is properly SEO-friendly, basically so that people can find it in the category that you're after, a lot of thinking and marketing goes on there. Plus, I'm building a bunch of other stuff in Bolt, and I don't want to burn all my tokens building just a landing page for a platform. I'd rather put those tokens into the other stuff that's actually worth putting tokens into. We'll jump back into some Bolt stuff with the hackathon that we're part of a little bit later. And on the other side of it, there's just a lot of consulting work, extending on the research piece. We've been working with The Smith Family, the not-for-profit organisation, on a lot of optimisation for the conversion funnels on their website. Oh, cool. We conducted a lot of research into the market, and Perplexity was actually a really interesting tool that supported me on that journey. I talked about Juno in the last couple of weeks and this experience, but I want to do a bit of a comparison piece later on, because of the update they did last week, which we talked about in the last five minutes at the end of our episode; it came out while we were talking. They did a really interesting research update, and I really liked what it put out. There's a lot of validation to do in terms of the accuracy of it all, but it was a really interesting presentation style, and we'll talk about that. Yeah, that's exactly what we need. How are you doing? I know you've got a lot on.
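To make that public-versus-logged-in split concrete, here's a minimal TypeScript sketch: the crawlable landing page is plain server-rendered HTML with real meta tags, while the app itself sits behind a login link. All names and URLs here are invented for illustration; this isn't how Bolt or any site builder actually works internally.

```typescript
// Minimal sketch of the split described above: a plain, server-rendered
// landing page that crawlers can read, with the client-rendered app kept
// behind a login link. All names and URLs are invented for illustration.
interface PageMeta {
  title: string;
  description: string;
  canonicalUrl: string;
}

function renderLanding(meta: PageMeta): string {
  // Real content in the HTML itself, unlike a client-rendered React
  // bundle that ships an empty <div> and fills it in with JavaScript.
  return [
    '<!doctype html>',
    '<html lang="en"><head>',
    `<title>${meta.title}</title>`,
    `<meta name="description" content="${meta.description}">`,
    `<link rel="canonical" href="${meta.canonicalUrl}">`,
    '</head><body>',
    `<h1>${meta.title}</h1>`,
    `<p>${meta.description}</p>`,
    '<a href="/app">Log in</a>', // the app experience lives behind this
    '</body></html>',
  ].join('\n');
}

const html = renderLanding({
  title: 'Virtual Sales Tool',
  description: 'AI-assisted sales workflows for small teams.',
  canonicalUrl: 'https://example.com/',
});
```

The point is simply that crawlers index what's in the response body; anything behind `/app` and a login never needs to be crawlable.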
You've got even more on. Oh my God, I feel overwhelmed. No, no, no. But thanks; there's definitely a lot on, as always. A lot of blockchain kind of stuff in terms of podcasts, and then on the AI side of things, it's been building for this hackathon and building out further features to other tools. I've got SIKE, which I show every now and then. We've got the agents. It just made sense to have... because before, each agent had to actually select its own documents. And especially when you've got something with 20 agents, 30 agents, 100 agents, which you can do in the tool, I don't want to have to select every time there's a new workflow and change it. Now I've got a global list, and if these agents need to refer to a global list, I just change the global list and they're all set. It just helps so much, the little things. What are you using to do that? What do you mean, in terms of using a tool to do the selection? So are you prompting, and then it's automatically selecting which tool to use? No, no, this is within SIKE. Oh, that's awesome. Sorry, I was saying that what's been keeping me busy is further updates to SIKE. A few weeks ago we made it so that you could download the whole agent workflow as a markdown file, like a text file. If you've got access to SIKE, you could click on it and you'd be able to get the update. Basically, you don't have to rebuild everything from scratch, and especially when you're getting to longer agent workflows, you want things that are just a lot easier to copy and whatnot.
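A rough TypeScript sketch of the global-document-list idea just described: agents resolve their context from one shared registry at run time, so one edit updates every agent. This is our illustration of the concept, not SIKE's actual API.

```typescript
// Sketch of a shared "global document list": agents resolve their context
// from one registry at run time, so one change updates every agent.
// Illustrative only; not SIKE's actual API.
type DocId = string;

class GlobalDocList {
  private docs = new Set<DocId>();
  add(id: DocId): void { this.docs.add(id); }
  remove(id: DocId): void { this.docs.delete(id); }
  list(): DocId[] { return [...this.docs]; }
}

class Agent {
  constructor(private name: string, private shared: GlobalDocList) {}
  // No per-agent document selection: look up the shared list when run.
  contextDocs(): DocId[] { return this.shared.list(); }
  describe(): string { return `${this.name}: ${this.contextDocs().length} docs`; }
}

const shared = new GlobalDocList();
shared.add('pricing.pdf');
const researcher = new Agent('researcher', shared);
const writer = new Agent('writer', shared);
shared.add('faq.md'); // one edit; both agents now see two documents
```

The design choice is indirection: agents hold a reference to the list, not a copy of it, which is why a single change propagates everywhere.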
And then another thing: we had cloning of projects. So, okay, I've got this workflow, I just want to clone it and then tweak it. Little things like that, but they all add up in terms of value adds for users and clients, and folks that are using it have been commenting, so it's validation. How do you get SIKE to understand which model is the best? No, we don't do that; I'm pre-choosing. You are preselecting. All right, cool. There's only one model that you've got in SIKE: GPT-4.1, which for most tasks, 99% of tasks, is just the best tool. And yes, there's an argument of reasoning models versus just the normal models, but for agentic workflows, our thesis is that we want our users to have that control, rather than clicking a button and who knows what the agents are going to do. These are users that actually want to set things up and be able to see, if things go wrong, which agent out of the 20 or 30 did the thing that the others then started falling apart on. So it's because of that, and the thesis that you could have a lower-grade model, but if you've got really strong prompts and a framework around which the agents are doing stuff, it will beat reasoning models, because those users want that reliability. So we just have the one model. We'll play with reasoning models and other stuff later on. But it's funny: we're so advanced in AI in terms of what we talk about here, yet in the real world, the end users that will get benefits from this don't need the fancy stuff yet. And your target demographic is also businesses and organisations, so you're trying to build something around the workflows that they're engaging in.
And for the purpose of SIKE, there are obviously a lot of connectors into the assets and content and documents and research and emails and other tools that are in their backlog. I'm just quickly showing you here what that looks like. The internet's slow, so pardon me. So I bring it up. And on that business side of things, that's where the model choice is important for the purpose of what it's doing. They're not needing to do general search and ideation for the purpose of your customer, so why have those models in there? So here's something where I can load a previous example. I've got the global variables thing here. If I wanted to do things like clone, I can clone that, open it up in others, run this one that I have open that has access to certain documents, and it will just go through that. Obviously I'm just checking some example medical stuff here, which will be interesting when we get to the healthcare thing we're looking at later. I remember when you first launched this, well, when you first built some of this stuff, you already had a lot of those agent flows. You were doing agent flows before "agent" became a term; they just didn't use that terminology. Yeah, it's interesting. It's funny: we went from agent flows as an industry to going, no, we need complete agent autonomy, and then people did that, and then folks were like, okay, this is cool, but I don't have control, I lost control. So then it's gone back to the middle, the pragmatic middle. That's where it's at. Anyway, shall we get to the news? Yes. And bloody hell, quite a quiet week in comparison to the two weeks prior. What a quiet week.
I wanted to bring up this one: the Australian Productivity Commission has done a report on AI. They said that given Australia is a service-based economy, 80% of the workforce is in services, apart from manufacturing and mining and agriculture, so the promise of AI is high, but they want the regulatory issues updated, and it's important on the access-to-data side. This was on Bloomberg; there was a really good interview, you'll see it in the show notes. Good old Australia, what a great view. It's not looking like that today; it's not as sunny, and it hasn't been sunny like that for the last two weeks. But some really interesting stuff going on there. And then, to coincide with that, there is this, let's call it an AI Adoption Tracker, because that's exactly what the name is, targeting Australia as well. This is really interesting. The Australian Government Department of Industry, Science and Resources have got this, with some insights from it, and they've also got a dashboard. Gotta love a good data dashboard. So let's look at AI adoption in Australia: good, bad, who knows. Anyway, this triggered a question for me that I will now ask. Perplexity, Power BI... look at this, folks. This is the world I used to be in, and my stomach is churning looking at it. ETF Tracker, which is what I used to use before, was built on Power BI for the public version, the version that you make available for free for people to use. And you built that as well. Yeah, and it was really popular, but really strict in terms of how you build with it. And look, it's working really well, because I think the site is down. I would even go as far as to say strict as in limited as well. So we've gone from ETF Tracker looking like the old Power BI, having to wait for it to load, to now you can just build your own.
You know, using AI tools and stuff. So what you can see on screen is a revamp. Not Power BI: I'm not paying licences; I'm paying for server costs, for hosting. But this isn't showing any data, which is clearly working. What's going on, folks? Come on. The survey: they've got some more information about how they actually go through it. Let's just go to one of the reports rather than the data dashboards. Adoption trends back then: this was up to September 2024. Let's do the new one for Q4 2024, so late last year. Some high-level numbers: 40% are adopting AI, 21% not aware of how to use AI (an opportunity), and 38% not planning to. So what is going on, folks? We are being left behind. Industry-wise, services is high in terms of adopting; the lowest is manufacturing and hospitality, and then health and education. And we're tackling that health and education side of things, because there's a lot of inefficiencies, as we've discovered, and as I've known a little bit about, but by doing this Bolt hackathon in healthcare we're discovering a whole lot more. So that's a bit of news, and definitely worth checking out the report, and hopefully the Power BI dashboard does come back. But hey, Government of Australia, Department of Industry, if you want better tools, happy to help you build something. We do a lot of engagement; you've seen the stuff we've done for Aus DeFi. We could just do that, right? We can build it on your own infrastructure, blah blah blah. Call me, call me now. Another interesting one, back to Australia, was the Australian Financial Review AI Awards, and we saw some locals like Danny Liu of the University of Sydney winning a major award there, the Education and Research Prize. So, folks behind that here: this is Katie Ford from Microsoft.
There's also a mention of are paging, who we work with as well, and a supporter of the community. So it's great to see acknowledgement of the local community really pushing, despite the numbers showing not much AI adoption, and the tracker not working. Yeah. There are the insights into AI adoption that we've built in Power BI... there is no way. AI adoption: check back in a few years. All right, to adopt AI? Okay, I'm so sorry. So that was interesting, and I'll get into some other stuff, but do you want to jump on to something else, or should I just go through? Keep going, keep going. All right, I'm on a roll, folks. Andrej Karpathy, the great, the noble, the esteemed, has done this, which is an attempt to explain the current ChatGPT model lineup. You're reading this now? He put this out on June 3rd, so this is already old; it's out of the car park, and he's actually got to rewrite it, because there have been updates since. So he talks about what people don't know about the different models. o3 is obviously the best for important, hard things: a good reasoning model, much stronger than 4o. And 4o is different from o4, yes, I know, lol. It's a good daily driver. Stupid naming. Can someone fix up their naming? They keep talking about it, and then they release a new model and they call it, like, 3.10 or something. In any case, I know why they're doing it. Obviously, whenever they keep updating things, the models have different outcomes and they find different strengths, so rather than overwrite something that is still doing something well, they just add a version of it. But it does create a cluster of confusion. And then they had 4.5, which came out before 4.1.
And then they just quietly made the 4.5 mentions less prominent. Well, it's still there; it's just hidden, more hidden. Anyway, 4.5 came out before 4.1, begat whatever, I don't know, but he's got a good thing here. Yes, good old Bible reference, and a good image describing it: 4o, great for most tasks, anything easy and fast; o3 for anything hard or important, because it takes more time; GPT-4.5, maybe, for creative writing; o1 Pro, don't use, he says; 4.1 mini, don't use, he says; 4.1 for vibe coding. Really interesting. And then the deep search mini, yeah, do not use, according to Andrej. Sorry, if you go back to that image: some of those are only accessible via APIs now, aren't they? Like o1, for example; you can't get it in your Plus plan, or is that... I don't know. Maybe if you click on more models they're on yours. I have them, yeah. Okay, so Chris can't access it because he's not special. I don't know; I've probably got the same as you. Or maybe it's $200-plan access. Could be, yeah. It probably is. Anyway, don't use it anyway; it doesn't matter. Be careful. I wanted to jump to something else, which is really interesting for those in the tech space that have been following it for a while. Before the whole ChatGPT kind of world, there was an AI world: machine learning, forecasting, other kinds of models, and neural networks were around even before that. Mary Meeker is an acclaimed analyst in the finance space who would look at tech trends, and the Mary Meeker yearly report on tech trends was always a big piece to dive into. She'd been quiet for a few years and then comes out with this banger of a 340-page report.
I'll just go through it to show you what this looks like. You get a lot of these comparison charts; a picture paints a thousand words, and she really does that here, looking through different companies, AI and work trends. So, USA IT jobs, AI versus non-AI: AI-focused IT jobs have been increasing rapidly throughout 2025 versus non-AI. Then there are comparisons to previous years: things like how long it took for the printing press, transistors, flight and all that kind of stuff to become mainstream, and how rapid adoption is for AI. Compute cycles, what they used to look like over the years, and just in the AI era we're in the tens of billions of units. It talks about training data set sizes and the increase in that. But at the same time, whilst training sizes are increasing, inference, which is what's used whenever you make a request on any model (every time you make a request, there are tokens; that's the inference time, the thinking time it has right at the point where the user is using it), those costs have just plummeted. So it's hardly anything; you can do stuff on your own, kind of. And we do this for our websites, where for the public-facing version we're happy to have AI in there because it's such a low-cost thing to put in. Anyway, a whole lot of charts, a whole lot of comparisons to previous years as well, in terms of milestone timelines, as she's got here. Top things AI can do today, according to ChatGPT, and just a whole lot of charts. So if you're a lover of charts, a lover of comparisons, and of seeing how other tech trends grew versus AI, this is definitely a report for you. I'm up to page 53 and there's already so much here. There was a really good video that dived into it.
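As a back-of-the-envelope illustration of why per-request inference is now cheap enough to put on a public website, here's a tiny TypeScript helper. The per-million-token prices are made-up placeholders, not any vendor's real rates.

```typescript
// Toy arithmetic for per-request inference cost. The per-million-token
// prices below are placeholders, not any vendor's actual rates.
function requestCost(
  inputTokens: number,
  outputTokens: number,
  pricePerMillionInput: number,
  pricePerMillionOutput: number,
): number {
  return (
    (inputTokens / 1_000_000) * pricePerMillionInput +
    (outputTokens / 1_000_000) * pricePerMillionOutput
  );
}

// A 500-token question with a 300-token answer at hypothetical rates:
const cost = requestCost(500, 300, 0.40, 1.60);
```

At these placeholder rates the request costs well under a cent, which is the "it's hardly anything" point made above.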
If you don't know The AI Daily Brief, they've got about 60,000 subscribers, more than us. How many subscribers do they have? 60,000, so they're doing well. 60,100, or 101 now, or whatever. Anyway, 102: I'm signed in on my Aus DeFi page, so I follow him already; just add another random subscription. Yeah, exactly. But he goes through the report and he explains some of the charts here, so worth checking out. Next thing was the big one, Chris. This is crazy; there was a ton of stuff to go through here, but I'm going to highlight more of this and this, and then I'll jump over to you for some other news. AI Engineer had their World's Fair. Big World's Fair. Have you heard of those old world's fairs, like in Chicago, where in the 1920s and 30s or whenever they literally put up buildings and built little mini towns and stuff? Yeah, definitely. And all the technology; go back even further, like the old Western ones. Crazy stuff, right? I'm just going to check where this particular one was hosted: online, and Chicago as well? No, it was all online. And the reason I'm going through my notes here, which you probably won't see because Chris will edit that out, is I just want to go to some particular things that they had. So this particular section, and it is on mute because I don't want the sound for this coming through. So, Ashish... oh no, this is before her. Let's just go back to 34 minutes in this one here. This is a really good image that was shown on how this person, who does a lot of writing, like daily emails and newsletters to people, works in terms of his pseudo-code. It's natural language: you can understand and read it, but it's not actual code; it would be converted into something. So he's got a pre-process step.
He uses a knowledge graph over his existing sources of information. He does parallel processing for planning, structured data extraction, and doing tool calls, like it might need to parse things through different tools that you attach to your AI, etc. Then he's got this analyze, authorize, act kind of process. It does a summarize step with a human in the loop, so he checks the work, and then it writes the tool call: first it's just reading which tool calls it's going to use, then it writes them, and then it delivers interactively, so it actually generates the code, the UI. And then, after all of that is published (these are newsletters), he actually brings the output back in so it can go into memory for continued evaluation. I've done a lot of newsletters, but I haven't done the linking of, okay, you've done all these ones in the past, keep on improving, which is really cool; a useful thought model for when you're building anything. This is for newsletters, but it could be for tools. There's a second one, and then I've got a third, which is human input to AI. Where is that one? If we go to 33... so that was a good one that he had there. Another chart that I think is interesting is, I think it was this one. I might have to... oh yeah, this one here. Chris, this is actually really interesting. What's going on here is: how much effort does the human give, versus how much output effort the AI gives back to you. In a world where it's Copilot, you, the human, are one unit of effort, versus Copilot giving you half an effort. So if you have a look here, ChatGPT is 1 to 1; Copilot they've got at less than one, just helping you on the side; with ChatGPT you're directly talking to it. When it's the series of models, the reasoning models, simple agent kind of workflows, like what I described with SIKE...
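The newsletter workflow described above (pre-process over existing sources, plan, a human-in-the-loop authorize step, act, then feed the published issue back into memory) might be sketched like this in TypeScript. The stage names and functions are our paraphrase of the talk, not the speaker's actual code.

```typescript
// Hedged sketch of the newsletter workflow described above. Stage names
// (preprocess, plan, authorize, publish) paraphrase the talk; none of
// this is the speaker's actual code.
type Draft = { summary: string; toolCalls: string[] };

const memory: string[] = []; // past issues, fed back in on the next run

function preprocess(sources: string[]): string {
  // Stand-in for the knowledge-graph step over existing sources.
  return sources.concat(memory).join(' | ');
}

function plan(context: string): Draft {
  // Stand-in for parallel planning and structured extraction.
  return { summary: `Issue based on: ${context}`, toolCalls: ['render', 'send'] };
}

function authorize(draft: Draft, approve: (d: Draft) => boolean): boolean {
  return approve(draft); // the human-in-the-loop check before acting
}

function publish(draft: Draft): string {
  memory.push(draft.summary); // close the loop for continued evaluation
  return draft.summary;
}

const draft = plan(preprocess(['AI news', 'reader replies']));
if (authorize(draft, () => true)) {
  publish(draft);
}
```

The part the speaker highlights as unusual is the last line of `publish`: published issues flow back into `memory`, so each run can improve on the ones before it.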
There, it's one unit of human input to about ten units of output from the AI; in value terms, roughly a 10-to-1 ratio. Deep research, and NotebookLM with what it does, especially the podcast stuff, remember, is more like 1 to 10,000, he's saying, in terms of what you get back. Whether deep research actually gives you something of value is in the eye of the beholder. But then there's this opposite model. Think about this: what if it's zero human input versus one unit of value the agent gives you? What does that mean? It means you're doing nothing, and these AI agents are sitting in the background doing something for you. Does that make sense? Yep. So say you've connected your email to the new ChatGPT models, and we'll talk about this, and ChatGPT is now doing stuff. I'm not saying it does this, but in this 0-to-1 world where you're doing nothing and the AI is giving you something, those AI tools are acting as sentries. When things happen, they go: something happened, I need to do something, I need to alert the user. These are good mental models when thinking about what you're building. Like I said, the SIKE one is me doing one human input and getting some output back, but it's not 10,000 like deep research; it's controllable, in bits and pieces. Sorry, go ahead. No, I was going to say, with the ambient agent thing, surely the cost isn't really zero, because someone has to set it up to begin with. But once it's set up, are we assuming it's more about the action than the setup? Exactly, we're talking about from the point where it's already set up: 0 to 1. Yeah.
Because even if there was a lot of effort to set it up, if this thing then runs for a thousand days, the effort of setting it up on that one day becomes negligible. That's what they're talking about here. It is interesting, and maybe some people will get ideas for building these ambient agents. Did he provide a good example of an ambient agent in play? I can't remember; it was a long video and I was watching it late at night. But say you've set something up as a content creator or marketer, and part of your job is talking about trending topics. The AI tool knows that, so every single week it pulls in a summary and creates four posts for the week, ready to share. Literally all you're doing is having a look, and the posts are there, written in the language and style you like, because you've trained it previously. Over time, going back to the statistics, that one setup effort trends towards zero, because you've now done a year's worth of posting off an initial setup, which is crazy. You could literally do that kind of stuff, and that's a very simple example. Exactly. You could add much more sophistication to make it more likely to give you the results you want. Now, this is an interesting one. We said at the start that everything's kind of a wrapper, and people bemoan the wrappers and say they're bad. But if you think about it, everything is a wrapper. You didn't have to create the TypeScript language or build the computer; everything is a wrapper on top of something.
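The "sentry" behaviour described above can be shown in a few lines. A minimal sketch, assuming a stream of events and two user-supplied functions; `is_interesting` and `summarise` are illustrative names, not any real agent framework's API.

```python
def ambient_agent(events, is_interesting, summarise):
    """Zero ongoing human input: scan events, surface only the matches."""
    alerts = []
    for event in events:
        if is_interesting(event):          # the "something happened" trigger
            alerts.append(summarise(event))  # the "alert the user" action
    return alerts

# Toy example: a marketer's agent watching for trending topics.
events = ["routine newsletter", "TRENDING: new model release", "spam"]
alerts = ambient_agent(
    events,
    is_interesting=lambda e: e.startswith("TRENDING"),
    summarise=lambda e: f"Draft post about: {e.split(': ')[1]}",
)
```

The one-off cost lives entirely in defining the trigger and the action; after that, every alert arrives with zero human input, which is the 0-to-1 ratio from the chart.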
In any case, if you are going to do that, especially with AI, this is an interesting framework to look at. What a good wrapper looks like is what they call the "thick wrapper". It's thick because it's harder to attack. A thin wrapper means you've simply created a chatbot using OpenAI and you're just letting people use that; there's not much defensibility there. Making it more defensible means making it context-dependent. Imagine it's in the healthcare space, like we'll talk about later: you have your knowledge, you're working with GPs, you're working with healthcare professionals. That's the context, and you're presenting that context to the model; how you actually present it involves nuance and specialization. Then there's how you orchestrate the models: are there multiple models, and what specifically is, say, a 4.1 model being asked to do in that workflow in the healthcare setting? Then, and this is where a lot of your expertise comes in, how do you present the information back to the user? You could have all that first part, but if you're not presenting it well, it's a recipe for disaster. And finally, how do you enable workflows? Not just agents doing everything, but how do you make it easy for users to actually get advantage from the AI? If you're doing all of those things in a specialized area like healthcare, law or finance, you have much more defensibility than just building a simple wrapper anyone can copy. So wrappers are not bad, basically. Yeah, we've talked a lot about that before. Manus is a great example of that, though again, obviously, a very simple example of it.
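The four layers just described (context, orchestration, presentation, workflow) can be sketched as a skeleton. This is a hedged illustration: `ThickWrapper`, the domain store, and the lambda model client are all made up for the example; a real build would swap in an actual retrieval store and LLM SDK.

```python
class ThickWrapper:
    """Defensibility comes from the layers around the model call, not the call itself."""

    def __init__(self, domain_store, model):
        self.domain_store = domain_store   # proprietary context, e.g. healthcare knowledge
        self.model = model                 # any underlying LLM client

    def gather_context(self, query):
        # Layer 1: context -- retrieve domain-specific knowledge for this query.
        return [doc for doc in self.domain_store if query.lower() in doc.lower()]

    def orchestrate(self, query, context):
        # Layer 2: orchestration -- how the model(s) are prompted and chained.
        prompt = f"Context: {context}\nQuestion: {query}"
        return self.model(prompt)

    def present(self, answer):
        # Layer 3: presentation -- shape the raw answer for the end user.
        return f"Answer (with sources): {answer}"

    def ask(self, query):
        # Layer 4: workflow -- one simple entry point the user actually touches.
        return self.present(self.orchestrate(query, self.gather_context(query)))

wrapper = ThickWrapper(
    domain_store=["GP referral guidelines", "Medicare billing rules"],
    model=lambda prompt: "summary based on retrieved context",
)
result = wrapper.ask("referral")
```

A thin wrapper is just `self.model(query)`; everything else in the class is the moat.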
In fact, even what we're looking at doing with the competition, which we'll talk about shortly, ideally fits exactly that same boat. Yeah, exactly. "Fitting the same boat", is that the terminology? It's cool, and that's exactly it. So, what have you got next? We'll do some quick-fire news. Go, go, go. That was really good. Kicking things off, Mistral just launched a closed... a coding client, Mistral Code. So for all my fans of open-source tools out there, they've launched the ability for you to integrate Mistral with your own clients and start doing some coding. Some initial benchmarks have come out and it's not the greatest, but you're comparing it against the likes of other open models, R1 and Grok for example, and they're improving it day by day. They've started this, it does something, they've launched it as a sort of beta program for people to start playing with, and they're going to keep training it more and more. It's really cool to see Mistral still hanging on and going strong on the open-source side. It's funny, because with open source there's this back and forth: okay, the private models are the best, and then, oh my god, look at the Llama models, Llama 70B is just as good as the 400-billion-parameter model and you can run it on your computer if you've got 64 gig of RAM, and then Mistral comes out with their stuff, and it's back and forth again. But so many choices for people. It speaks to the importance of that middle ground. Going back to the wrapper argument, someone needs to translate what that means back to users. Definitely. So.
But hey, I like that they're doing it. And speaking of translating things back to users, Anthropic is doing something I think is quite funny. They're writing a blog using their own model to write the content. Oh no. Obviously with human oversight; someone's reviewing it. This comes back to that 1-to-10 effort diagram you showed us a moment ago. They've launched a product called Claude Explains, sitting on their website, largely populated by technical posts about various Claude use cases, giving you insights on how to use it, whether it's coding, general research or general tool use. They've created a blog that is "updating itself", in inverted commas, since as I said they're using human oversight. But as it learns and picks things up from people using it, people who have allowed their use cases to retrain the model, they're pushing that content into the blog. Can I just say, this is interesting and it speaks to an understanding of AI. I'll give you the scenario: people think, and it's rightful to think this, that as soon as I type something in, it's training the model. Well, there was this concept of reinforcement learning from human feedback, and that's how the models were actually trained: a lot of testing in the background, seeing what the AI can do, with humans in the loop saying what's good and what's bad, tagging things and so on. That approach is still being used, immensely, especially for coding. But all that training happens before you, the user, actually get to use the model on the screens we've got in front of us.
When we're using it, that's inference, not training; we're using it at the point of the end user. There wasn't really an official setup like this before. You could manually create things yourself, have the thing feed back into itself with your own database, but having a setup where it's off doing its own thing and all the humans are doing is going "yep, that's good", "that's bad, no", "more like this", that's the kind of future of work this points to. Some people have played around with it, but it's nice to see an official one, and the people who play around with it are the minority of minorities: AI users are a minority, and the subset doing advanced stuff is even smaller. By announcing it and saying they're doing it within the actual Claude front end, I can see that kind of workflow becoming: you get to "hire" literally a journalist, a writer, a content creator, trained on your data, and you just go "yep", or "no, I don't like that, do this". Knowledge work is risky and scary right now. Oh yeah, very much so. And as soon as we get to that Minority Report moment, it all ends. Love it. That's really cool, and I think it's a really cool use case for them: showing their use of the tool in their own business, which you don't really see very much, even though I'm sure plenty actually are doing it. But yeah, that was a nice little story. Now for quite a few little OpenAI updates. ChatGPT gave a nice update for all of its Plus users in the last week. What do we get? Quite a lot, actually, beneficial for businesses, for business users and for individuals. I forget if I'm a Plus or a Pro user. I think I'm a Pro.
I'm a Plus user. You're a Plus user? I'm not, because Pro is the highest tier, right? Pro is the highest, I believe, then Plus. But they're bringing all this stuff down to the free users too, which is great, because it was originally Plus-only. So, we talked about Codex a couple of weeks ago, maybe last week, which is a nice new tool. And as we were just discussing with the "yes, this is great; no, this is bad" kind of training model, that's how Codex was actually built as well. It's now being rolled out to Plus members; I think it was in beta before, now it's full access, and I'm pretty sure they're rolling it out to free users too. Even more importantly, free users are getting more memory, which does beg the question for a lot of people: is it worth upgrading to the Pro or Plus subscriptions, given how much you get out of the free version of ChatGPT? Even from a search perspective, a research perspective, using models like 4o, there's a lot on offer now for free users of ChatGPT, and this is great. This is why we're starting to see those little dips in Google search results; the signs are clear. Non-open-source models are becoming more relevant to general consumers who don't want to pay money to use the tools, and there is so much you can do, particularly with an increase in memory. That is awesome to see. Which means more follow-up questioning without it losing context; all those sources, all that frustration, gone. Exactly right. So what's next? A nice little free one there.
Continuing on OpenAI: as I mentioned, a lot of things have just been announced for business users, stuff I actually haven't had a chance to play around with because it only came out literally yesterday. They've released a whole bunch of new internal connectors, a lot of them in beta, allowing you to connect ChatGPT to your sources: Google Drive, Gmail, and I heard SharePoint as well. SharePoint, yes, that's it. So SharePoint, Dropbox, Box; you open it up and it can see into your Box now, all the private files. In addition to Google Drive, which you could link before, it can now connect to the actual drive itself. This is great for everyday tasks and research where you're doing a lot of back and forth. If you're a Gemini user you're probably doing this already; now you can use OpenAI and those tools as well, and engage with it from the ChatGPT window without having to switch platforms, find links and titles, and copy big chunks of text across. Exactly. Two things stand out from that. One is privacy: how much are you okay with? Very true here in Australia, at least. And look, this is easily solved if OpenAI says "we now have regional models". They kind of have it through Azure; Microsoft has OpenAI models sitting on their servers, so you can keep things here in Australia. But given how ChatGPT is developing, if there was a model that literally just sat here in Australia, that could be really interesting for folks to use. Other businesses don't care if the data moves around; they don't have that restriction, so they'll try the SharePoint connector and whatnot.
But one thing I will say, because we went down that path two years ago with companies where we were connecting to their SharePoint: they start asking questions and get ambiguous answers. The reason is that the data the AI is looking at is disorganized. If you're looking for something to do with an HR policy, but you've literally said "look at all of these folders" and there are non-HR folders, it tries to pick up stuff from there too. And this isn't just OpenAI; this is any model. There was no ChatGPT-to-SharePoint connector back then; I'm saying that when you build, as a developer, a connection from SharePoint through an AI model, whether it's OpenAI's or not, these were the issues even two years ago, and people will face them now. They'll connect their SharePoint, ask questions, and go: why is that answer ambiguous? Why does it have our old description? And then they realize: oh, our data is messy. But the good thing is, that doesn't mean you should stop. You can actually use AI to help you clean up the data and find the ambiguity. So don't let what I've just described be a reason to stop; there are ways around this kind of stuff. And it's funny how AI actually helps you overcome the hurdles you find when you start using AI. 100%. And I had an additional thought on where you were going with that.
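The "messy folders cause ambiguous answers" problem above can be demonstrated with a toy retriever. A minimal sketch with made-up folder paths and documents; the scoping idea (restrict search to approved folders, exclude archives) is the point, not any particular connector's real API.

```python
# Toy document store: path -> text. The stale 2019 policy is the trap.
documents = {
    "HR/leave-policy-2024.txt": "Staff accrue 20 days annual leave.",
    "HR/old/leave-policy-2019.txt": "Staff accrue 15 days annual leave.",
    "Marketing/brand-voice.txt": "Always sound friendly and cheeky.",
}

def retrieve(query, docs, allowed_folders=None):
    """Return paths of matching docs, optionally limited to approved folders."""
    hits = []
    for path, text in docs.items():
        if allowed_folders and not any(path.startswith(f) for f in allowed_folders):
            continue  # skip folders outside the approved scope
        if query.lower() in text.lower():
            hits.append(path)
    return hits

# Unscoped search surfaces the stale 2019 policy alongside the current one,
# which is exactly where the ambiguous answers come from.
all_hits = retrieve("leave", documents)

# Scoped to HR and filtered to exclude the archive: one unambiguous source.
scoped = [h for h in retrieve("leave", documents, ["HR/"]) if "/old/" not in h]
```

Cleaning up or scoping the underlying data is the fix, whichever model sits on top.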
It's even around how a model looks at your data and pulls that information. It has become quite known that OpenAI tends to fill in the gaps with made-up responses, or external-source responses, more than any of the other paid tools. There was a comparison done between OpenAI and Claude, for example: Claude was more likely to say "I don't know" or "I can't find it", while ChatGPT was more likely to either find an external source or just make something up if it wasn't allowed to go external. And that can be a big problem. Absolutely, especially in health or finance, where you're very reliant on accuracy. If you're not setting the model up properly, if you don't have the right protocols in place, it will fill in the gaps. It speaks to that sycophancy problem they had just a few weeks ago, and only solved a few weeks ago, maybe a month or two. Feels like weeks, who knows. People were giving it, sorry, literally "shit on a stick": "my business plan is to put shit on a stick", and it would go, "what a wonderful idea, you are pioneering", blah blah blah. Being overly sycophantic. And you know how they stopped that? They literally changed its custom instructions. They were clear about this; they actually presented it and showed the things they changed, so you could do a diff comparison between the new instructions and the old, and they essentially just put in "don't be sycophantic". That is the rule they put in, and yes, it stopped doing that. But because it's just a prompt rule sitting in the background, the potential to hack around it is strong. Anyway, interesting stuff. Very cool.
So what you can now do is essentially run deep research into your own tools, your own documentation, your own databases, even into SQL in the background, which is really cool. That's kind of crazy. You can connect contract repositories and your cloud storage services; you can combine custom sources with web search and existing connectors to get a complete picture of a topic or theme across all of your data. Really cool things you can do there. You can capture and summarize team meetings; you can connect into your conversations and your virtual calls, turn brainstorms into next steps and plans, and get exports with direct access to your email; and it can search references in past conversations, so it gets a full picture of your history. Even talking about some of those things really highlights the risks you were mentioning, but at the same time it creates some really cool and simple opportunities to integrate AI where it would usually be quite complex, if you're not already in, say, the Google ecosystem with Gemini and similar tools. Fantastic. So those were the really big things out in the news today. All right folks, another thing we're doing is really exciting, actually, isn't it, Chris? Very cool. You can see it on screen here: this is the world's largest hackathon. 86,769 participants. We spoke about this last week when it was at 50,000, so already another 30-odd thousand people have joined, which is kind of crazy. There's a whole heap of rules here, and different prizes and challenges. The "silly shit" challenge: using Reddit's developer platform and Bolt to bring your wackiest, weirdest and silliest ideas.
We're just going for the main challenge, which is basically building an app mostly with Bolt. You can do some stuff outside it, but it has to really be built with Bolt to showcase what it can do. Do you want to tell the good folks a little bit about our idea? We're not going to show anything just yet, but it will be made public, because we have to make it public. It's really interesting, because it might not be a challenge a lot of people are facing internationally, but definitely here in Australia one of the biggest problems we face in health is that the tools available for the general public to even understand and manage their health are next to none. We have the My Health Record tool, but it's problematic. It's problematic with so much data in there, and there were trust issues that plagued it from the very start, so some people don't even really use it, even though their data is there, or don't understand how to use it. And the issues are massive, right? You've got all this data, and there are two sides to the health journey, to making a well Australia: the providers and practitioners side, and the patient side. We're working with someone awesome named Sal on this, so shout-out to Sal; you'll see them on the show soon. We have seen a lot of tools coming from the top down, not government tools, but support from the top down for healthcare providers, practitioners and so on. Even then there are still problems with the tools they've got, and on the patient side there hasn't been as much, so patients get lost in their health journey.
So we've got this idea to create a two-sided application which supports the general public, the person with the health problem, through whatever their medical journey is, helping them understand the insights and information using relevant AI tools in the market. At the same time, it has a platform on the specialist side that lets them immediately understand what the patient is going through and create, what's the word you've been using a lot, better empathy when engaging with a patient. Yeah. One of the ideas is that a whole lot happens on the patient's journey, say, before the consultation. The information the GP has, or doesn't have, is messy; whatever they've collected, they have to go through really quickly to understand what the patient is going through, and then there's also the feedback from the patient. When you've got these short 15-minute windows for a consultation, that's not enough time to gather things. It's funny: we're able to keep people well in spite of all these hurdles with health data, and yet we've got these amazing generative AI tools that, if pointed the right way, and obviously we've got to tackle privacy and all the other challenges as part of it, can actually harness all of that to make lives easier, mainly for the patient, with GPs and other practitioners brought along with it. You can see the prizes for this on screen. There's a whole lot of chatter I've seen on Bolt's Discord, a whole lot of sharing of information.
People showing what they're building, vibe coding with other builders, describing their businesses; some really interesting stuff going on there. There's a US$100,000 grand prize, second place gets 75,000, third place 50,000, fourth 25,000, going all the way down, plus many other bonus prizes. Really interesting stuff. It is cool; I'm really excited for this, I just love the idea. Not so much from the specialist side, but even from the patient side: giving people who generally don't understand the lingo, and who are on complex journeys through some very tough situations, the ability to go into a platform and instantly understand what is being said to them, what they need to do, and what they've taken. Not what the AI has prompted them to take; it's pulling from the direct information their doctors have supplied, just putting it into a place that lets them understand it in a really, really easy, simple way. "Tell me like I'm five", you know, educate me like I'm five years old. That's what it's going to be able to do, and that's powerful in itself, pretty much. So watch this space; much more on that coming soon. Over to the next thing we wanted to do, which was comparisons. How's the comparison going on your side? The comparison is going strong. Gemini is taking the longest, but we'll do a quick summary of it. Let's have a quick chat, because I have a fair idea what Gemini is going to show anyway.
So, last week Perplexity released their new research feature, called, to be specific, Labs. Labs is a new way of analyzing the data and research that's out there and presenting it in a much more meaningful, almost project-slash-structured way for you to use in documents, insights tools, research papers, white papers, all that type of stuff. It's really interesting how they present the content. What we've done here is use a really simple query; I would usually use a much more structured query for this, but for all intents and purposes, based on something you mentioned before about the Australian market and AI adoption, where you'd only found insights up to December 2024, I went with: understand the adoption rate of AI by Australian businesses for Q1 of 2025. We've put that through the four big players. First up is good old ChatGPT. I used o3 with deep research, so a reasoning model plus the deep research mode. I put the prompt in, and the first thing it did was ask me: did you want to do anything in terms of particular industries and sectors? I said I want to compare all the industries, tell me generally what's happening in the market, and show me some comparison of trends for those industries. It was the only one that prompted me back, which is interesting, and which I like. That's what o3 does; it's a bit of an agentic workflow, it queries you, which is great. Looking at the output, you can see it's even pulled through the data from the report we showed before, but from 2024, so it hasn't actually built anything around 2025 up front. But it gives you a nice overview.
It does what it does best: it creates tables. The latest report it found was only up to the last part of 2024. It was, but that's just that one particular source, right? There are other, more up-to-date sources out there that talk about the market. Adoption rate by industry; let's see how it's written. AI uptake varies widely by industry and services, and we know finance and banking lead, great. It's done a bit of a comparison with big blobs of text: healthcare, manufacturing, services. Look, this is really good content, and it's provided sources for everything, but it's a little out of date; everything in here is talking about 2024, even though I specifically said 2025. Still, good insights: here are investment levels and key trends "in 2025", though even where it says 2025 it's still referencing 2024. Yeah, I guess it just has to find the latest material it can; maybe there hasn't been as much written yet. But it's a lot of text, a big text dump. Really interesting. Now Gemini. Gemini's report is done now. Whoa, that is a big executive summary. A monster. Some nice tables there. Overall, for 2025, again, data from 2024. Let's talk about sources: I think it said it used 24 sources, at the bottom. Current adoption: again lots of text, a couple of tables, some interesting insights. High adoption, really simple, easy to read, which is good. Oh wow, so much more text. It's a really thorough insight, but you'd have to get your AI to read through it: "please summarize this 20-page document". Let's see at the bottom how many sources it pulled. Does it say there? It doesn't really say. That looks like about 20 or so.
Yeah, I'm pretty sure it was about 24 that it ended up using. Okay, so Gemini: 24. ChatGPT: I think it said 33 at the bottom. Click on the sources, and on the side there you can see 33. Cool, 33 sources for ChatGPT, Gemini around 24. Claude: do you want to pull up your Claude? I've got it here. Really, really brief; it didn't actually go into much detail at all. Again, we did the deep research. You can see the thinking it had at the top, and if you click on that, the ten results it used, mostly 2024 Q4, not as much 2025. It did say the feature was in beta. It was, that's true, very much in beta. Very high level, very easy to read, summarizes nicely; not too bad. But lastly, the big old comparison: Perplexity. This is where it looks impressive. What they've done is really simple, clear articulation. It found up to 43 sources that it pulled through, and it's actually accumulated these into nice tables for us. It's impressive that it actually created these tables: it wasn't grabbing them from the research, it grabbed the data and built the tables itself. Exactly. Can you click on those tables? There you go, you can download them. Not bad: estimates for large-scale AI, weekly AI usage and revenue benefits, adoption rates across business, adoption rates by industry. Wow, agriculture versus other sectors; not many farmers are using AI, unfortunately, but we'll have to do something about that. Australian AI trends: we are growing; Q2, wow, it's exponential. Australia needs to really step it up, I reckon. We could compare that to the US, which could be a follow-up prompt. Then we go back into the text sections.
But really nice structure and visualization compared with what the other tools are doing. I don't think the quality of the data is any better or worse; it just structures things a little differently. It does put in reference points, as they all do. Right at the end you can download a file and generate the report in a nice summary format, which is really interesting and very cool. I like ChatGPT for the depth of the data it's got; Perplexity, just wow, impressive in terms of what it provided; Gemini also had some good stuff. But Claude, what are you doing? Yeah, like you said, it's new; the feature is new, so I wouldn't be doing any deep research in Claude for now, I'd probably ignore that. We'll probably do some more deep dives into this in future episodes, and stay tuned for more of the hackathon stuff we'll get to show as well. Any final thoughts from you, Chris, as we finish up episode 32? Stay safe, stay creative, enjoy the videos out there, but the deepfakes are a raging wild beast; be safe with what you're watching and what you're believing in. And for the 38% of businesses out there that are not using or planning to use AI: we're going to talk. So watch out, we're coming knocking; we will be there on your front step. That's it. All right, thanks very much, folks, we'll see you again. Thanks to Fix Global for the market site here. And just thanks for being you. See you later. Bye. Worldwide tech trends blazing. Get you to fire. A.I., A.I. rising to the sky. Digital world. Why NotCentralised?