Leveraging AI

249 | Fast-takeoff fears, $1 B Disney-OpenAI pact, GPT-5.2’s pro-grade leap, Gartner yells “block AI browsers,” and Apple bleeds AI talent—our mega AI recap for the week ending on December 13, 2025

Isar Meitis Season 1 Episode 249

Is AI finally ready to do your job — better, faster, and cheaper?

In this week’s Leveraging AI news recap, host Isar Meitis unpacks a flurry of groundbreaking developments in the world of artificial intelligence — from the release of GPT-5.2 to jaw-dropping advances in recursive self-improving AI (yes, it’s as intense as it sounds).

Whether you lead a business, a team, or just need to stay ahead of the AI curve — this episode is your executive summary for everything that matters (and nothing that doesn’t).

We’ll also dig into the billion-dollar OpenAI–Disney partnership, how real users are actually leveraging AI in the wild, and why the Fed is finally admitting AI is changing the job market.

In this session, you'll discover:

  • The GPT-5.2 release: performance benchmarks and real-world capabilities
  • Is GPT-5.2 better than humans at actual work? (71% of the time, yes)
  • Why OpenAI’s new “not-an-ad” ad rollout caused a user revolt
  • OpenAI x Disney: Why $1B is being bet on AI-generated Mickey Mouse content
  • GPT-5.2’s weak spots and where Claude Opus still dominates
  • What Recursive Self-Improving AI means (and why Eric Schmidt is nervous)
  • AI designing its own hardware: A startup that could rewrite Moore’s Law
  • New usage data from OpenRouter, Microsoft, SAP & Perplexity – how people actually use AI
  • Why prompt length is exploding (and what that means for your business)
  • AI agents in browsers: the productivity revolution or a security nightmare?
  • Databricks proves AI sucks at raw documents (and how to fix it)
  • The psychological bias against AI-created work — it’s real
  • Claude’s new Slack integration: is this the dev team you didn’t hire?
  • Apple’s AI brain drain & why it matters
  • Gartner says: Block AI browsers (for now)
  • AI and unemployment: The Fed finally connects the dots

Want to future-proof your team’s AI skills?
Isar’s AI Business Transformation Course launches again in January — a proven, real-world guide to using AI across content, research, operations, and strategy.
👉 Learn more & enroll here:  https://multiplai.ai/ai-course/                

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker:

Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we had another really exciting week behind us. First of all, we got the release of GPT-5.2, which is a very interesting model, and we're going to dive into that as the first topic. Then we're also going to dive into multiple research reports from several different leading labs on how people are actually using AI, for what purposes, and how that has evolved through this past year. And we're also going to dive into recursive self-improving AI, which suddenly became a big topic from multiple different sources all in the last 10 days or so. So these are going to be our three main topics, and then we have a lot of other small topics to cover, with lots of new, interesting releases and features, and lawsuits, and interesting partnerships, including Disney and OpenAI. So we have a lot to cover. Let's get started.

As all the rumors suggested last week, GPT-5.2 was released this week, and it is a model that is focusing on something very specific. Instead of telling you what that specific thing is, I'm just going to read a few quotes that either were part of the release notes from OpenAI or that several of the leading figures at OpenAI shared in different interviews and press releases, and then it will all become very clear. The first one is from the release, which says: overall, GPT-5.2 brings significant improvements in general intelligence, long-context understanding, agentic tool calling, and vision. Another quote from the release: we designed GPT-5.2 to unlock even more economic value, better at creating spreadsheets, building presentations, and handling complex multi-step projects. Fidji Simo, OpenAI's CEO of Applications, said: we designed 5.2 to unlock even more economic value for people. It is better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools, and handling complex multi-step projects. She also said it is the most advanced frontier model and the strongest yet in the market for professional use. Greg Brockman said: GPT-5.2 is here, the most advanced frontier model for professional work and long-running agents. Brad Lightcap said: introducing 5.2, our latest model and the most capable for knowledge work, setting a new state of the art across many benchmarks. So if it's not clear, the focus, at least from a messaging perspective, is real-life work and agents. That is definitely the focus of GPT-5.2. This is where they invested most of the resources in aligning this model, and this is where they're investing all their focus when it comes to messaging why this model is important.

The overall feedback across the web was very positive, especially from people who had early access, such as Ethan Mollick, whom I follow all the time. Ethan Mollick was very excited about its ability to run long and complex tasks based on a one-shot, single, not too sophisticated prompt. The example that he gave: he asked GPT-5.2 Pro to generate a new type of shader, and with a very simple prompt he got a complete, running, working shader that included the creative interpretation of what he requested, the mathematical precision of actually developing the capabilities behind it, and the code to actually run it properly.
He specifically called it a challenging assignment and not just a toy example, meaning this is something that can be used in real life, and he was able to create it with a single-shot, relatively simple prompt. Mollick was also very positive about the model's success on GDPval, which is a benchmark that OpenAI themselves invented, but he said that GDPval is probably the most economically relevant AI benchmark so far. For those of you who don't remember what GDPval is, it's a benchmark that uses 44 occupations across nine different sectors. It reflects actual real-life professional work versus different puzzles and specific structured things that a model needs to solve. It measures tasks that usually take humans four to eight hours to complete, and 5.2 outperformed humans 71% of the time, as judged by other humans. So when human evaluators got to see the responses, they didn't know which one was AI and which was not; they just picked the response they liked more, and GPT-5.2 performed better than humans 71% of the time. This is very significant. It also performed the work 10 times faster and at 1% of the cost of humans performing the same tasks. To put things in perspective, the next model is Claude Opus 4.5 with 59%, so it's 71% compared to 59%. That's a very big spread, and the spread to Gemini 3 Pro is even higher: Gemini 3 Pro is at 53.5%.

Now, I am not a huge believer in benchmarks in general, but it's still a good way for us to understand how the models rank, at least on specific topics. On SWE-bench Verified, which is real-world coding tasks, GPT-5.2 Thinking is at 80%, Claude Opus 4.5 is at 81%, and Gemini 3 Pro is at 76%. So the three are very close together, very high on that ranking. On GPQA Diamond, which is graduate-level science, GPT-5.2 Thinking is at 92.4%, the Pro version is at 93.2%, Gemini 3 Deep Think is at 93.8%, so slightly higher, and Claude Opus 4.5 is at 87%, a little bit behind, but all relatively close. On AIME 2025, which is a math benchmark, GPT-5.2 reached 100% accuracy without using tools. On ARC-AGI, which is more of an abstract reasoning benchmark, the kind of puzzles the systems cannot prepare for and just have to understand, so it really measures the ability of these models to understand a problem and develop a logic in order to solve it, GPT-5.2 Thinking scored 53% and 5.2 Pro scored 54%, compared to Gemini 3 Deep Think with 45%, so a very big spread, and Claude Opus 4.5 at 38%. On Humanity's Last Exam, GPT-5.2 scored 36%, still slightly behind Gemini 3 Deep Think with 41%.

So again, the focus of this model was very clearly to develop a model that would be good at real-life work versus being really good at benchmarks. We'll have to wait a little longer to see if this focus actually translates into real value for people. The model right now is only in its early stages on LMArena, and right now it's not doing very well there, but I assume we'll start to see it rank higher in the next few days as more and more people start using it and testing it on LMArena as well. As of right now, it is definitely not doing well on LMArena, which conceptually tests for real-life use cases, because it's people loading their own prompts for actual things they're trying to do.
So whether the promise from OpenAI is going to materialize is something we will be able to measure better within the next few days or weeks, once we start seeing real feedback. From the initial use cases that I put it through, I was not impressed. It actually failed me several times on a few very specific things that I was trying, including getting correct references to topics that I was researching, and it actually did worse than 5.1 and even 5 at getting me accurate links to relevant articles on something I was researching. And yes, that's a very specific edge case, so it doesn't imply anything about the rest. I did not yet try it on the things that they're saying it does very well, which is spreadsheets and slides and multi-step projects. Right now, for those things, my favorite tool is Claude Opus 4.5, and the same thing for coding: I find Claude Opus 4.5 an incredible coder, and I'm going to record an entire episode on the project I've done with it in the past few days, but that's for a later episode. But in general, a new model from OpenAI. By the way, several different people at OpenAI specifically said that this is not related to their code red event that we reported about last week. This is a model that they've been working on for months and that has had this release date for months. So the code red thing will probably be an early Q1 release, potentially January, and this was an important and yet incremental upgrade to GPT-5.1 that was scheduled for this date and has been worked on for a while now.

And since we're talking about OpenAI, a few more interesting things from OpenAI this week. First of all, they have quietly been testing a new memory search tool that basically allows you to search through the growing list of items in the memory that it saves about you. One of the strongest features that ChatGPT has is that it keeps a really long memory of you if you have been sharing stuff with it, and I intentionally do that. I actually teach ChatGPT about me so it provides better and better answers, which means the stuff it has in its memory about me is now really, really long, and the ability to search that will actually be very powerful. This was shared by TestingCatalog, which is a company that monitors what's actually happening and being released across the board, including behind-the-scenes pieces of code and exactly what companies are doing. The new tool mirrors a very similar search that is currently available in the Atlas browser on Mac. For those of you who, like me, are Mac users using that browser, there is a browser memory search tool where you can actually go and see everything, and a very similar implementation is potentially coming to the memory inside of ChatGPT, which I think will be very helpful: to be able to find the weird reasoning behind why ChatGPT does something specific based on its memory, you can find what the issue is and then either fix it, delete it, or at least know what it is. So a useful tool that I assume will roll out to all of us in the next few days.

A big piece of news from OpenAI this week is that they've signed a very interesting deal with Disney.
So the way the deal works is Disney is going to invest $1 billion into OpenAI as part of a licensing deal that will allow ChatGPT, and more importantly Sora, to remix and create videos of all the known Disney characters across all its different brands, such as, obviously, Mickey Mouse, Star Wars, Marvel, Pixar, all the characters that we all love and that everybody probably wants to create videos of. Now you'll be able to do this in Sora with a licensed approach from Disney. Now, the deal itself is really weird. On one hand, yes, it provides OpenAI with another billion dollars, which is really great for OpenAI. The specific article that shared this information comes from The Information, and they said that this $1 billion is a small amount, but it is very, very important, because OpenAI is planning to burn through $115 billion between now and the end of 2029, and so they need to continuously raise large amounts of cash in order to finance the crazy losses that they're planning to have. But what the deal doesn't share, or at least wasn't shared publicly, is exactly what Disney is actually getting out of it. Disney is investing $1 billion when they don't have huge amounts of cash on their balance sheet. As of the end of Q3, they have just over $5 billion on their balance sheet, meaning 20% of that is going to go to OpenAI. That's a very large percentage for an investment, and it doesn't clearly state exactly what they are getting back as far as the licensing deal itself. So if I create a video of Luke Skywalker, what does Disney get in return? There are hints that they may get additional shares at the current valuation if this happens, but that means they only make money if the value of OpenAI actually increases, because otherwise they're just getting shares at face value and there's not much value in that. So it's not really clear what Disney is getting out of this; they're definitely betting a lot of money on this deal. What they are getting for sure, and that's something I mentioned on this podcast several times before, is more exposure on a future platform that will allow the next generation to know more about Disney characters. So I definitely see that as a benefit. I assume there is some kind of a licensing deal behind the scenes that will allow Disney to make money when its characters are being used in Sora and/or ChatGPT. And as I mentioned several times on this podcast, I really hope that's the direction everything will go: that we'll find a way to have AI get access to all the IP but compensate the creators one way or another, similar to how Spotify figured it out and now pays creators, and everybody can enjoy the music in a much easier way than buying CDs or vinyl records. So hopefully this is the direction it is going. This may lead, and I don't know how far this envelope can be stretched, to individuals or groups creating new series or complete films about Disney characters, because now it is available in Sora, and if it is available legally, I can create as many scenes as I want, stick them together, and then potentially release my own episodes. I don't know what that means for my legal exposure with Disney if I do this, but if the tool legally allows it, that definitely opens the door for stuff like that.

Another big piece of news from OpenAI this week is that Slack's CEO, Denise Dresser, defected from Slack, which means Salesforce, and joined OpenAI as its Chief Revenue Officer.
OpenAI has seen incredible growth in its enterprise segment in 2025, and they are now pushing that even further to figure out how to drive significant revenue from this channel. Dresser's 14 years of experience at Salesforce and her time as Slack's CEO since 2023 are a perfect fit for that, right? She has amazing experience in driving revenue from implementing enterprise-level software solutions that deliver additional value to companies. OpenAI's CEO of Applications, Fidji Simo, praised the hire in an internal memo that said: we're on the path to put AI tools into the hands of millions of workers across every industry. Denise has led that kind of shift before, and her experience will help us make AI useful, reliable, and accessible for businesses everywhere. So this is just another big hire of a very prominent, dominant figure in the software world when it comes to implementing AI in enterprises. Another big one is obviously Fidji Simo herself, who became the CEO of Applications. This is just another example of the crazy talent wars that are going on between the leading labs and companies across the board. In the past few weeks there have been several major departures from Apple to all the major labs, including Meta, OpenAI, and Anthropic. So, not good for Apple. We didn't dive into this in any of the episodes, but it's all in the links in the newsletter if you want to learn more about it. But overall, a big win, I think, for OpenAI when it comes to developing the right relationships with enterprises for ChatGPT implementation.

The bad news for OpenAI this week came from a big backlash of users online who were bitching about the fact that ChatGPT started showing them ads. One of these was Benjamin De Kraker, who posted on X the following: I'm in ChatGPT (paid Plus subscription) asking about Windows BitLocker, and it's fucking showing me ads to shop at Target. Yes, screw this, lose all your users. And then he posted a screenshot of that offer, and there have been multiple other users showing the same thing. OpenAI very quickly came out and said that these are not ads, that they're not testing ads, at least at this point, and that they're not getting any financial compensation for this. The idea was to promote new applications inside of ChatGPT, to tell people that they actually exist and that they can use them. Initially, OpenAI denied it even exists, but then later that day, Mark Chen, the chief research officer of OpenAI, posted on X, admitting: I agree that anything that feels like an ad needs to be handled with care, and we fell short. We've turned off this kind of suggestion while we improve the model's precision. We're also looking at better controls so you can dial this down or off if you don't find it helpful. ChatGPT head Nick Turley followed up on X: there are no live tests for ads; any screenshots you've seen are either not real or not ads. And yet there were several other examples of unsolicited plugs for Peloton workouts and, again, Target shopping that are very clearly real. So what does this tell us? It tells us three different interesting things. The first one is that OpenAI is really bad at PR, and this is a very basic mistake that just shouldn't happen. You cannot release something like this into the wild without testing it first, telling people that it's coming, and explaining what you're doing. It just looks bad.
The second thing is that the feature itself makes absolutely no sense, because it is showing these quote-unquote not-ads when the context is completely irrelevant. If you are looking for something related to Microsoft programming and you're getting ads or promotional nudges to go and shop in the Target application inside of ChatGPT, it makes absolutely no sense, and they have the context; they know what the user is looking for. So this just looks very, very bad from an implementation perspective as well. The third thing, and this is to me the most interesting aspect of this, is that people are really not willing to see ads inside of ChatGPT, especially when they're paying for it. And it is very obvious that OpenAI, and probably the other labs as well, are looking at this as a great way to make more money and as a legitimate channel. The fact that users are so against anything like this in their feed should raise a lot of red flags in the OpenAI universe and drive them to understand how they can actually integrate this in a way that will be accepted by their user base. Maybe the solution is to have a free ChatGPT that is less limited than the current one and that will be ad-supported. That would allow them to reach a lot more users who are not currently willing to spend the 20 bucks a month and who would still be able to use the entire platform because it would be paid for by ads. And the last point I will make about all of this is that this is another place where Google can win big, because people already expect Google to show ads. This is part of what we know. Google is also very good at showing you relevant ads that complement what you're doing, when you're doing it, as you need it, which makes a lot more sense than just random ads. So I think Google will find it a lot easier to implement ads in specific relevant scenarios, because they have the experience in doing this, they have the infrastructure to do this, they have the frameworks to do this. They have everything they need in order to do this more successfully. Overall, not a good stunt by OpenAI across the board.

And since we talked about GPT-5.2's release, a few other releases this week. Mistral AI, the French company, has deployed the next generation of its vibe coding platform, Devstral 2, which uses their recently released Mistral 3 models. It is very obvious that 2025 has been the year of code writing with AI and coding agents, and they're just joining the bunch. Their models are definitely not in line with the frontier models; their models are more or less nowhere to be seen on the LMArena ranking. But these are open-source models that are really cheap to run. They have two different variations: one with 123 billion parameters that requires a minimum of four H100 GPUs to run, and its smaller brother with 24 billion parameters that can run on your local computer. So the fact that these are open-source models that can write code decently well might be relevant to different people. And we're going to see later on, when we discuss the research about global usage of AI, that open-source models for coding and similar tasks are actually gaining a lot of traction. I do think that they will find it very, very hard to compete with the Chinese models that are also open source and currently much better than Mistral's models.
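To make the "can run on your local computer" point concrete, here is a minimal sketch, not tied to any specific Mistral release, of what running a roughly 24-billion-parameter open-weight model locally typically looks like with the Hugging Face Transformers library. The model ID below is a hypothetical placeholder, and 4-bit quantization is assumed so the weights fit on a single consumer GPU.

    # Minimal sketch: local inference with a ~24B open-weight model via Transformers.
    # The model id below is a HYPOTHETICAL placeholder, not an actual release name.
    # 4-bit quantization (bitsandbytes) is assumed so the weights fit on one consumer GPU.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "example-org/placeholder-coder-24b-instruct"  # hypothetical

    quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=quant, device_map="auto"
    )

    # Build a chat-formatted prompt and generate a completion locally.
    messages = [{"role": "user", "content": "Write a Python function that merges two sorted lists."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

The practical takeaway is the hardware split described above: the 123-billion-parameter variant needs a multi-GPU server, while a model in the 24-billion-parameter range, quantized, is within reach of a single workstation GPU.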
Another interesting release this week comes from the UK. A company called Loci, as in LOC-AI, just released their first model, which is a crowdsourced, UK-built chatbot that is claiming to beat ChatGPT on a few specific benchmarks. They're claiming, based on their internal evaluations, that Loci (I am not sure how to pronounce it) surpasses GPT-5, Gemini, and DeepSeek in conversational ability and human preference. What does that exactly mean? I'm not exactly sure. Do I really think they have a chance to compete with the leading models? I don't. But the interesting thing is actually the business model, and the fact that it's powered by blockchain-based user computing instead of "let's build a data center for a hundred billion dollars." The fact that they're using a distributed, blockchain-based network of computers with almost no investment in infrastructure, while still creating a model that is worth mentioning, is interesting to me. Or as their CEO, Jameson, said: Britain does not need to outspend the world to lead in AI; we need to outthink it, because we will not win the AI race simply by building bigger data centers. A few things that were mentioned about this model by the people who got access before its release: it is a lot more polite and British than the average models we know today, but it is lacking almost all the other bells and whistles and capabilities and tools we expect right now, such as a mobile app, image generation, voice mode, more than limited tool calling, et cetera. So right now it's a very basic model, but again, I am interested less in the model itself and more in the concept of how it was developed. It will be very interesting to see if that is actually scalable and can deliver real results compared to "let's invest $50 billion and build a bunch of data centers."

But maybe the most interesting model announcement of this week actually comes from Japan, from several ex-Googlers who have built a model that they're claiming is the first real AGI model. They've built it as a way for robots to learn independently, with no data sets and no handholding. Integral AI has shown that they have built an autonomous skill-learning solution that allows robots to learn new skills without being trained on them, in areas that are not similar to things they were trained on before. In unsupervised trials, which basically means it's something new that the robot doesn't know exists, the system taught robots fresh tasks in uncharted domains, things they haven't seen before, while being able to deliver multi-layered, very nuanced precision, including language, thought, and very specific dexterity to complete these tasks. They're also claiming that the amount of energy it took to train these robots to do these tasks is similar to the amount of energy it would require humans to learn these tasks. They're calling it a fundamental leap compared to how AI is being trained today, which is extremely data intensive with a huge investment in infrastructure that is not required in the way they have implemented it. Or as their CEO, Jad Tarifi, said: today's announcement is more than a technical achievement. It marks the next chapter in the story of human civilization. Our mission now is to scale this AGI-capable model, still in its infancy, towards embodied superintelligence that expands freedom and collective agency.
What they shared immediately reminded me of the interview with Ilya Sutskever from just over a week ago that we discussed in last week's episode. In that interview, Ilya shares that what he believes about superintelligence is not necessarily sheer immediate intelligence, but the ability to learn on its own and improve by learning, so developing a system that is better at learning, which seems to be what this group has developed. Now, while they are focusing on embodied intelligence, basically robotics, I'm sure the same concepts can be applied beyond robotics to developing systems that will learn very effectively without having access to huge amounts of data. This may not replace large language models as we know them right now, but it might be highly relevant for specific fields. This connects directly to the whole concept of recursive self-improving AI, which we're going to talk about in a minute, because there have been a lot of mentions of it in the last few days.

Before that, a few more interesting releases. n8n just released n8n version 2. For those of you who do not know, n8n is a workflow automation platform that has gained huge success in the last couple of years, especially in the more geeky, more technical community. Think about tools like Zapier and Make and Relevance and so on. n8n belongs to that group, but it's a lot more technical and less user friendly, and hence a lot more capable, because it can run code and do a lot of other things in much more effective ways. In this release, they shared some information about the crazy growth that they're seeing. This is an open-source project, and they shared that since version 1, which was deployed in July of 2023, GitHub stars skyrocketed from 30,000 to 160,000, their forum members grew from 6,000 to over 115,000, and the core team grew from 30 to 190. I really like n8n. I find it to be extremely powerful despite the fact that it is not user friendly, and I use it relatively frequently, together with make.com, which I'm also using. When I need something very simple, I go to make.com; when I need something more complex, I will go to n8n. The biggest benefit of n8n, as I mentioned, is that it is open source, which means you can self-host it and then only pay for the hosting, and you can run as many automations as you want without paying per step or per automation. So what's the big deal with version 2? On the surface, it looks very similar. There's a facelift to the user interface and there's a side menu that's supposed to make working in n8n a little better. But most of the upgrades happen under the hood, focusing on much higher levels of security and performance. So these are the key things: better performance, higher security, and a higher level of stability and consistency as the automations are running. All of them are really important if you want to implement this to actually run operations in your business. I haven't seen any feedback from actual users yet, as it was literally just released, but I will update you on the overall sentiment after more people share what they found about working with n8n version 2.

And now to our second big topic, which is recursive self-improving AI, also known as RSI. Why are we suddenly talking about this? Because within a week and a half, there have been multiple angles that have addressed this specific topic.
The first one was Eric Schmidt, the former CEO of Google, who spoke at the Harvard Kennedy School, and he was specifically talking about RSI as a near-term governance problem. Schmidt was talking about recursive self-improving AI in very practical terms, versus the theoretical concepts that were all we had until not too long ago. He explained that today's AI systems still require humans to design the training runs and the models and to create the infrastructure and so on, but he is arguing that this is temporary and that it can change relatively quickly. He was talking about the fact that there is a lot of discussion that this can happen within the next two years. His timelines are slightly more stretched, he's talking about four years, but it's still in the very near future for us to deal with these kinds of things. He suggested that AI systems capable of autonomously generating new scientific hypotheses, discovering new mathematical insights, and advancing new medical fields are likely very close, again, in the next few years. And Schmidt wanted to sound the alarm, saying that as a society this is something we need to address, and we need to clearly draw the line on how much agency we're willing to give machines to improve themselves.

So before we dive into what other relevant people are saying and the other facts, I want to give you a quick explanation of what the hell recursive self-improving AI is. As Eric Schmidt said, the way AI is developed today is that you have a bunch of scientists who collect huge amounts of data; they create the models, they create the training runs, they improve the algorithms themselves, and this way AI gets better over time. However, a self-improving AI can create its own data sets, as we'll see in a minute, can create its own hardware, can create its own new algorithms, and then create a new version of AI that will then be able to do the same thing again, just faster and better, because it's a better model, and then do this again, faster and better, and so on. This is what's called, in the professional terminology, a fast takeoff. It's the point where you lose control, because the AI can develop better and better AI, faster and faster, building on the improvements of the previous model to build a better next model. And this is how we might lose control over what AI and computers can do.

Almost at the same exact time, within a day or two after the interview with Eric Schmidt, we heard an announcement from Anna Goldie and Azalia Mirhoseini that they have founded a new company called Recursive Intelligence. Both of these founders previously worked at Google-affiliated research organizations and were deeply involved in AI-driven semiconductor design, most notably the AlphaChip initiative. What is that? It was an initiative to use AI to design better chips, and those designs were actually used in chip building. They were able to prove that AI can design better chips than humans can, while coming up with original ways to design chips that humans did not think of previously. So what are they trying to do? They're trying to build AI models to design the next variation of chips. Those chips will allow the next version of models to be trained faster and cheaper, which will then be able to design the next version of chips, and the cycle continues. So unlike the concepts that Eric Schmidt was talking about, which were only software, they're also talking about the hardware aspect of this.
Their goal is to compress a multi-year process of semiconductor design into weeks. This means that the iterations of the next variations of AI-specific hardware can happen in much faster cycles, which will enable even faster development of models, and the rest is very, very clear. Based on their previous success, they were able to raise a $35 million seed round led by Sequoia at an estimated $750 million valuation. If they can figure this out, it will completely change the way computer hardware is developed right now, which may put the entire AI hardware ecosystem into a very interesting scenario. Just think about how much money is being poured right now into GPUs and building data centers, and I shared with you last week in the bubble discussions that there are very big questions about the actual effective lifespan of these chips. The discussion is around five years; some people are saying slightly more, some people are saying slightly less. But if this thing happens and the next cycle of chips can happen in, let's say, three months, then the previous version becomes obsolete, and then three months later, and so on, which puts a very big question mark over the current business model of AI hardware. It obviously also amplifies the whole recursive development question, because now the hardware gets better at a much higher pace than it does right now.

Another angle that adds to the mix came from OpenAI. OpenAI's alignment and safety research team has issued another paper, and that paper specifically names recursive self-improvement as one of the categories of concern. Now, in this paper they're not claiming that they're there yet, that they have achieved recursive self-improving AI, or that it even exists today, but it is treated as a future threshold of risk, a capability that should require systems to be controllable, auditable, and aligned with human values. The emphasis in the paper is not on stopping the progress of these systems, but on ensuring that these systems improve in a way where humans still stay in control, have complete oversight, and can stop it before it goes out of control. This is the first time that OpenAI names RSI specifically in its papers as an area of concern. Another reference to this came from Jared Kaplan, who is the chief science officer and co-founder of Anthropic, and in early December he discussed recursive self-improvement in interviews and public commentary associated with Anthropic's safety messaging. He also described RSI as a civilizational decision point, and he emphasized that such a process is inherently uncertain: you basically do not know what AI will do. Once systems begin to design their successors, humans may no longer be able to fully predict the outcomes, which is obviously a huge risk for humanity.

So from all of these and some additional references, we can learn a few things. First of all, these RSI systems can self-improve across multiple verticals: the architecture they're built on, the training process and the training data they use, the hardware they run on, the algorithms they use, and the actual research on how to develop better systems. All of these will be able to be done by AI that will create a better version that will be able to do the same thing better. So it's not just the algorithms, it's not just the code, it's all of the different components combined. Now, where can this go? It can go in three different ways. One of them is human-governed RSI.
Basically, humans remain tightly in the loop and can pull the plug whenever they see that it is the right time to pull the plug. The other option is partially autonomous RSI, basically systems that can handle most of the tasks on their own with human oversight only at big steps and stages. And the last one is fully autonomous RSI, which is very obvious: it just runs and does whatever it wants. There are obviously conflicting interests here between safety and the potential benefit of such a system, right? If you are in the race for global dominance, letting the system run autonomously will move things significantly faster, but from a safety perspective, it could lead to a complete catastrophe. And when all these leaders, all roughly at the same time, start talking about the risks that this represents, it means one thing and one thing only, which is that they're seeing glimpses of this in their labs already. You need to remember that all these labs have models that are significantly more powerful than the ones they're releasing and that we have access to. So when we are comparing benchmarks of GPT-5.2 to Claude Opus 4.5 to Gemini 3 Pro, all of these are the things they're releasing; what they have in their labs, and what they're testing and experimenting with and researching, is probably between six and 18 months ahead of what they're actually releasing. And if all of them suddenly, roughly in the same week, started talking about these things, it means they're seeing glimpses of it in their labs, and this is becoming very, very real to them. And this is scary, because with the wrong process this could lead to us practically losing control over how AI operates, what it does, and what it can do, which could, in specific scenarios, represent an existential threat to humanity. So on one hand, I'm happy that everybody is raising the flag. On the other hand, I would love to see significantly tighter collaboration between labs across the world and governments across the world to specifically look into AI safety, RSI included, and make sure that we are all aligned so that nothing catastrophic happens in the future. This might be more dangerous than atomic weapons, and we know that that was a successful international initiative. So I really hope we'll start seeing this coming together in the near future.

And our last deep-dive topic comes from multiple sources: several different labs and organizations released research based on actual usage of AI across different platforms, showing how people actually use AI. The first one we're going to discuss comes from OpenRouter. For those of you who don't know OpenRouter, it is basically an aggregator, a hub of more or less every AI API out there. All you need is one connection to them and just their API key, and you can consume tokens across all the different AI APIs that are out there. I've used it for multiple use cases previously; it's a very effective way to do this. They take a few percentage points of revenue, basically an arbitrage between the real cost of the tokens and what they're charging you, but you are getting a very simple integration to as many models as you want, including fallback and redundancy capabilities that can roll over to other models based on what you define. So a very useful tool overall. They worked together with a16z to review how people are actually using these models, across over a hundred trillion tokens consumed in 2025.
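For listeners who haven't touched OpenRouter, here is a minimal sketch of what that "one API key, many models" idea looks like in practice. It assumes OpenRouter's OpenAI-compatible chat completions endpoint; the model identifiers in the loop are illustrative examples, and any fallback or routing options should be checked against OpenRouter's current documentation.

    # Minimal sketch: calling several different models through one OpenRouter key.
    # Assumes OpenRouter's OpenAI-compatible /chat/completions endpoint; the model
    # ids below are illustrative examples, verify them against the current docs.
    import os
    import requests

    OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
    HEADERS = {"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"}

    def ask(model: str, prompt: str) -> str:
        # One integration, many providers: only the model string changes per call.
        resp = requests.post(
            OPENROUTER_URL,
            headers=HEADERS,
            json={"model": model, "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    for model in ["openai/gpt-5.2", "anthropic/claude-opus-4.5"]:  # example ids only
        print(model, "->", ask(model, "Summarize this week's AI news in one sentence.")[:120])

Because every one of those calls flows through a single hub, OpenRouter can see which models are used, for what, and for how many tokens, which is exactly the kind of data the report we're about to discuss is built on.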
In this paper, OpenRouter, together with a16z, evaluated how models are being used across over a hundred trillion tokens consumed through the OpenRouter platform between November 2024 and November 2025, across over 300 models from 60 different providers. So this is a lot of tokens being consumed. Now, to put things in perspective, even though the number sounds really significant, and it is, Google shared recently that Gemini is generating over a quadrillion, which is 1,000 trillion, tokens in a month. So a hundred trillion is still not a crazy amount, but it's definitely a big enough dataset to look at trends, for sure.

So what are the things that they shared with us? US-led closed-source models like OpenAI and Anthropic are still leading by a very big spread, with 70% of tokens. But at the same time, open-source models have been growing, specifically Chinese open-source models, which have grown from 1.2% in late 2024 to over 30% in peak weeks in 2025, with an average in the teens for these open-source models now. How people are using it has also shifted dramatically through this year. Programming-related tokens were only 11% at the end of 2024, and they are over 50% of all tokens by late 2025. Anthropic's Claude is leading that pack with over 60% for most of the year, dipping just below that in November of 2025, while open-source models, again mostly Chinese, see most of their tokens used for role play and gaming, or basically day-to-day conversations. What does that tell us? It tells us that when people want reliable code writing, they're willing to pay more for the closed-source models, but when they don't need the same level of reliability and consistency across really large data sets, they go for price, where the Chinese models are delivering very good value. Another big, clear trend is the usage of reasoning models, which has now hit more than 50%. And if you think about how crazy this is, the first reasoning model, OpenAI's o1, was released in December of 2024, so less than a year ago, and reasoning models already account for more than 50% of these tokens on OpenRouter. Another big difference is the growth in the length of prompts: prompt length has quadrupled this past year, to an average of 6,000 tokens per prompt. Another thing that has grown dramatically is the length of sequences, basically longer back-and-forth conversations, which tells you that people invest more time in writing better prompts and that they're using it for more and more complex processes. As far as where these tokens are being consumed, North America is still number one with 47.2%, but Asia is surging and is now at 28.6%, so still way behind the North American market, but growing very, very fast. They also looked at stickiness, meaning how many people are switching models versus staying with the same model, and they found that it depends on the specific use case: for serious, sophisticated, hardcore use cases, people stuck with the same models longer, while for day-to-day tasks there was almost no stickiness at all, and people would switch back and forth a lot more frequently.

So what can we learn from this? The first thing I want to mention is a caveat about the specific tool. A: again, this is not a huge amount of tokens, but it is definitely a good statistical size to look at.
But B: the people who are using OpenRouter are more technical people who are building their own applications through an API, which is definitely not your common user. So among common users, I don't think we would see more than 50% of usage going to programming. That being said, these findings are very interesting and they definitely show the trends. To me, the most interesting parameter out of all of this is the huge growth in the length of prompts and the length of chats. It basically signals that, despite all the conversation about how the models understand us much better and how prompt engineering is going away, it is clear that people understand that as they write better, more detailed prompts and learn how to construct longer, detailed conversations in a structured way, they get better results. I can see that very clearly in myself, especially since I started voice-typing most of my interactions with my computer, a lot of which are AI related. My prompts became significantly longer and significantly more nuanced and detailed, and while I follow specific frameworks for the structure of the prompt, I'm providing each of the components of the framework with more and more detail, making it a lot more nuanced, and I'm getting much better results.

Now, speaking of knowing how to build prompts better and getting better results: if you want to learn how to do this properly, if you want to learn the art and the science of prompting, if you want to know how to use AI for data analysis and research, if you want to know how to create content, whether visual, written, or video, with AI, basically if you want to know the fundamentals of how to use AI effectively for business, the next cohort of our highly successful AI Business Transformation Course starts in the third week of January. If you have not yet invested in more structured AI education for yourself, people on your team, or your company, you owe it to yourself to do this at the beginning of 2026, because the gap, the chasm, between the people who know how to use AI effectively and those who do not is growing every single day and has a significant impact on careers and the futures of companies. So if you're interested, there's a link in the show notes; you can click on that and learn everything about the course. I started teaching this course in April of 2023, so it's been two and a half years of evolution. I've been upgrading this course every single month, and I've taught it to some of the largest companies and enterprises in the world, and then about once a quarter I open it to the public. So don't wait. If you need AI training, and you do if you haven't done this so far, come and join us at the end of January.

Now back to interesting findings about how people are using AI. Perplexity has released a study together with Harvard in which they looked at hundreds of millions of anonymized Comet and Comet Assistant interactions. For those of you who don't know Comet, it is Perplexity's agentic browser, and Comet Assistant is the agentic aspect that runs within the browser. They were trying to analyze how people are using Comet, for what purposes, and how that evolves over time, and they shared some really interesting findings. First of all, 57% of agent activity focuses on cognitive tasks, 36% on productivity workflows, and 21% on learning and research.
They also saw a very clear partnership between the users and the agents, where the humans delegate information gathering and synthesis to the agents to expand their own capabilities, while the humans make the final call, which is more or less how I use AI for more or less everything. Another very interesting thing that they found is that when people get started, they start with very simple, low-stakes, fun stuff like travel and trivia, but as time goes by, people move to more and more high-power, high-quality, high-utility workflows that actually provide them real value. They have broken the overall usage into six different categories. The largest one is productivity and workflow at 36.2%. This category includes document and form editing, account management, email management, spreadsheet and data editing, computer programming, investing and banking, and a few others. The second largest category is learning and research at 20.8% of overall usage, with courses at 69% and research at 37%. The next category was media and entertainment at 20.8%, roughly the same level as learning and research, and there they have social media and messaging, movies, TV, videos, online games, music and podcasts, and some other smaller categories. The next one was shopping and commerce at 10.9%, with buying goods at 89% and services at 10.3%. And then the two smaller categories were travel and leisure at 7.1% and job and career at another 7.1%.

The other graph that was very interesting to me in that research shows how people change their behavior between their first queries and the average of all queries. On first queries, basically the first time people use this, media and entertainment was number one by a big spread, followed by travel and leisure, and then shopping and commerce. But if you look at the overall queries, number one is productivity and workflows, and number two is learning and research, which tells you that as people start to understand how they can use agentic browsers for actual real work, this is the direction they take. That gives us a hint of what the future of the agentic universe looks like, with agents helping us basically across the board, but with a very clear focus on productivity and business life. I've been using Comet and Atlas for a while now, and I can say that I use them mostly for building automations and helping me troubleshoot code when I need to, and they're actually very effective at doing that. It saves you the copying and pasting into a chat with ChatGPT or Claude or Gemini, and it has all the context of the specific websites and flows, because it can see the browser and it can research things that are going on and look at different components that you don't necessarily include in your screenshots. So I find it to be very, very effective in these kinds of use cases. The same thing can obviously be expanded to any other process that you are doing online where you need somebody to hold your hand or help you because you're not an expert, and so I highly recommend you try that as well.

Microsoft Research shared their dive into 37 and a half million anonymized Copilot chats, showing how people are using Copilot in the wild. And what I mean by "in the wild" is that this research looked only at users who are not on education or enterprise licenses, so basically the open-to-the-public version of Microsoft Copilot. They're sharing some very interesting findings. First of all, mobile users used Copilot
a lot more for health queries, around the clock, while desktop users focused a lot more on business-related tasks between nine and five, which makes perfect sense. Programming-related prompts peaked on weekdays, while gaming and exploration peaked on weekends; again, makes perfect sense. But the other interesting thing about weekends is that a lot of people went to Copilot for philosophical questions, including, as they quoted, existential clarity, and these kinds of conversations happened a lot more after dark, in the late-night hours, when people start to wonder and have AI help them think about their lives, the future of the world, and so on, more philosophical questions. So what do these patterns tell us? They tell us that AI is getting embedded across the board into everything that we do, from our daily lives to our psychological wellbeing to business use cases and day-to-day needs, which tells you that AI is becoming mainstream. It's no longer just a geeky thing; everybody is using it for more or less everything across the board. Now, while these are interesting and cool findings, you need to remember that the general public is barely using Copilot at all: just over 3% of global chatbot share belongs to Copilot, compared to close to 80% for ChatGPT, as an example.

Another interesting piece of research, this time about how AI is actually failing at specific things, came from Databricks this week, when they released OfficeQA, a new benchmark that uses 89,000 pages of US government information as the dataset to see how well AI can actually find, identify, and synthesize information. GPT-5 barely got 43% accuracy on document-heavy tasks, and that was actually even better than Claude Opus 4.5 agents, which hit only 37.4% on raw PDFs at large scale. Which basically comes to tell you that if you just unleash AI on a large set of documents that you currently have in your company, you should expect less than 50% accuracy, which is obviously unacceptable. Now, the reason Databricks shared this is obviously that they are selling a service to make that better. They have a service that does pre-processing of documents, basically parsing the documents into a structure that makes it easier for AI to analyze, and using this process Claude Opus jumps to 67.8%, so over 30% growth, and GPT-5.1 climbs to 52%, a 9.3% jump. A lot of it has to do with the structure of the documents, such as removing nested headers and merged cells in Excel files and so on. They also shared that agents failed pretty badly on visual charts and came up with plausible but wrong answers in many of these cases, and, as I mentioned, most of these models plateau at around 40% on tough multi-step problems, which basically signals that you need to go beyond just OCR parsing to get real, relevant results. I agree with that 100%. I've shared with you on several different episodes that I also have a software company. That software company developed a really amazing product that knows how to do invoice vouching and reconciliation automatically, connected straight into your ERP or accounting system. A potential client asked me in a meeting this week what the system does differently from traditional OCR scanning of invoices to reach the much higher level of accuracy our system delivers compared to traditional systems, and the answer was exactly this.
It's agents that look at the context and understand what the process is and what the outcome needs to be, way beyond just basic OCR. Great examples are discounts and refunds: they come in different shapes and different sizes, so the OCR itself can pick up a line, but what does it actually mean? How does it apply in your actual system? How should it be represented in NetSuite or your accounting system? All of these are aspects where, if you don't build a system around it in which agents actually understand the context and what the OCR output may mean, you might get very wrong answers. Which basically tells us that real life is a lot more complex and nuanced than just a benchmark. Hence why, in many cases, you try a new process, you do a quick test with a relatively small sample size and it works really well, and then when you actually try to run it at scale, company-wide, it fails miserably. And it's because of that: real life is a lot more nuanced, with many more edge cases, than we tend to think, and without being able to train the systems properly and teach them how to handle these cases with a lot more context, AI is just not there yet. Another great example of this you're actually going to hear in an episode that's coming in the next few weeks with Nate Amon, who helps large enterprises implement AI effectively, and he was talking about how investing in the structure of your documents and building them correctly will dramatically improve the results of AI when using these documents. He said something really profound that I never thought about before: that we need to start creating documents that will be easy for AI to read, and that humans can also read, versus writing documents for humans and hoping AI will be able to read them effectively. Now, the big question is obviously what you do with the tens of millions, or even more, of documents that enterprises already have and that already exist. For that, I'm sure there are going to be different optimization processes, such as the one that Databricks is offering. I also assume that AI will get better at reading them. The best example is Claude's latest ability to read significantly more complex and less structured spreadsheets, which literally blew my mind when they came out with a recent version that does that. So I think AI getting better at reading these documents, together with some kind of an optimization mechanism, will allow us to make better use of documents with AI.

But from the last piece of research that we're going to share this week, we're going to learn something even more interesting: the AI itself is just half the problem. The other half is how we humans perceive AI work. SAP just did a really interesting experiment where they gave five different SAP consultant teams analysis that was presented as having been generated by junior interns. Four of the teams hailed the analysis as impressive and validated it as high quality. The fifth team was told that their AI platform, Joule, had generated the research, and this team dismissed most of the findings and went back to double-check almost every aspect of the work. So the same exact work was given to five different teams; four did not know it was AI and praised the results, and one was told it was AI, said it was pure junk, and felt it had to go and redo the work. What we learn from that is that the current perception of AI is highly negative, in the sense that people have serious, deep fears about what AI produces.
Another great example of this you're actually going to hear in an episode coming in the next few weeks with Nate Amon, who helps large enterprises implement AI effectively. He was talking about how investing in the structure of your documents, and building them correctly, will dramatically improve the results you get from AI when it uses those documents. He said something really profound that I had never thought about before: we need to start creating documents that are easy for AI to read and that humans can also read, instead of writing documents for humans and hoping AI will be able to read them effectively. Now, the big question is obviously what you do with the tens of millions of documents, or even more than that, that enterprises already have. For those, I'm sure there will be different optimization processes, such as the one Databricks is offering, and I also assume AI will simply get better at reading them. The best example is Claude's recent ability to read significantly more complex and less structured spreadsheets, which literally blew my mind when that version came out. So I think AI getting better at reading these documents, together with some kind of optimization mechanism, will allow us to make better use of documents with AI. But in the last piece of research we're going to share this week, we learn something even more interesting: the AI itself is only half the problem. The other half is how we humans perceive AI work. SAP just ran a really interesting experiment in which they gave five different SAP consultant teams analysis that was presented as having been produced by junior interns. Four of the teams hailed the analysis as impressive and validated it as high quality. The fifth team was told that their AI platform, Joule, had generated the research, and that team dismissed most of the findings and went back to double-check almost every aspect of the work. So the exact same work was given to five different teams: the four that did not know it was AI praised the results, and the one that was told it was AI treated it as junk and redid the work. What we learn from that is that the current perception of AI is highly negative, and people have serious, deep fears about what AI produces. Now, the question is whether I think that is justified, and I think there are cases where it is and cases where it is not. I'll give you two quick examples. If it is mission-critical to get one hundred percent accuracy, like producing a financial report, then having second thoughts about the accuracy of AI is justified. However, if you are using AI for online research, and you are asking it to provide clear citations, links to the sources those citations are taken from, and the exact quote from each and every one of those sources, and you are looking for a trend or a broad-brush direction on a specific topic, it is definitely good enough, and it is most likely going to do better than what you could do on your own. Now, the other thing they shared about Joule, again, their own home-grown AI agent, is that consultants who are learning how to prompt it properly are getting significantly better results and outputs than those who give it very simple prompts. Which takes us back to learning how to prompt properly, which takes us back to training. You need training, for yourself, for your company, for your team, in order to learn how to use AI effectively. Last week I ran a workshop for a large enterprise, teaching their senior salespeople and their CBEs how to properly use AI to research and respond to government-style, large-scale RFPs, and it blew their minds what is possible, simply because they did not know. Now that they do know, I was able to show them how to query documents more effectively, do client research and people research, come up with hypotheses on how to respond to an RFP, analyze the requirements down to the individual requirement level compared to their competitors, and so on. All of that is possible if you know how to use AI effectively, and until now they had been doing all of it manually. Staying on the topic of using AI at the enterprise level, Anthropic just released a really interesting feature that connects Claude to Slack. So what exactly does it do? All you need to do is install the Claude Slack app from the marketplace, authorize your Claude Code account to connect to it, make sure it has access to Claude Code on the web, and then it does the following magic: it can look at the channels you give it access to and pull in all the context, everything that was discussed in those Slack channels, including code examples, bugs, and features. And if it has access to your code base, it can spin up several instances of Claude Code on the web, pull in the thread and channel context, write code, and create the right repos to solve bugs, build new features, and so on, all while reporting back to the Slack channel. It basically acts like an independent code writer on the team, one that has full context of everything that happened in an entire thread and can autonomously write code and fix bugs on its own while reporting back what it is doing. Is this scary as hell? Can this become one of the most effective tools for code development teams? One hundred percent, because a lot of the effort is happening in Slack right now, and a lot of the coordination is happening in Slack right now.
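To show the shape of the loop, here is a minimal sketch. To be clear, this is not Anthropic's actual marketplace app; it's a hypothetical illustration, using the official Slack and Anthropic Python SDKs, of the core idea: pull the context out of a channel, hand it to Claude, and post the result back where the team already works. The channel id and model id are made up.

```python
# Hypothetical sketch of the core loop (not Anthropic's marketplace app):
# pull channel context from Slack, hand it to Claude, post the answer back.
# Channel id and model id are made up; the SDKs are the official Slack and
# Anthropic Python clients.
import os
import anthropic
from slack_sdk import WebClient

slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
claude = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

CHANNEL_ID = "C0123456789"  # hypothetical channel

# 1. Pull the recent discussion (bug reports, stack traces, decisions).
history = slack.conversations_history(channel=CHANNEL_ID, limit=50)
context = "\n".join(m.get("text", "") for m in reversed(history["messages"]))

# 2. Ask Claude to turn that discussion into a concrete, reviewable plan.
reply = claude.messages.create(
    model="claude-opus-4-5",  # illustrative model id
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Here is a Slack discussion about a bug:\n\n" + context +
            "\n\nSummarize the root cause and propose a step-by-step fix "
            "that a reviewer could check against the code base."
        ),
    }],
)

# 3. Report back to the same channel, so the team sees what the agent intends to do.
slack.chat_postMessage(channel=CHANNEL_ID, text=reply.content[0].text)
```

The real feature goes much further, actually spinning up Claude Code sessions that touch the repo, but the loop is the same: read the channel, do the work, and report back in the channel the team already uses.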
And if you can tailor exactly what Claude Code can and cannot do in that environment, such as solving these kinds of bugs but not touching these kinds of infrastructure things, it can go and do those tasks autonomously while reporting back, so somebody doing code review can check the changes. As somebody who has been using Claude Code extensively this past week, I can tell you that Claude Code on Opus 4.5 is incredible. And as somebody who is also the CEO of a software company, would I allow this to run in my company? At this stage, probably not, though as a test on specific things, most likely yes. I think this will evolve over time, the reliability will improve, and then the fear will change dramatically, and this will become common in software companies sometime in the next 12 to 18 months. And now to a few rapid-fire items. The first one: Time magazine just named its Person of the Year, something it has been doing every single year since 1927, roughly the last hundred years. This year the Person of the Year is not just one person, it's several, and they named the leading minds behind the AI revolution we are living through as their people of the year. This includes Mark Zuckerberg, Lisa Su, the CEO of AMD, Elon Musk, Jensen Huang, Sam Altman, Demis Hassabis, the CEO and co-founder of DeepMind, Dario Amodei, the CEO of Anthropic, and Fei-Fei Li, the founder of World Labs, which we talked about a few weeks ago. They chose them, and now I'm quoting, for delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible: the architects of AI are TIME's 2025 Person of the Year. Interestingly, this group includes five billionaires with a collective fortune of over $870 billion. I think the more important aspect is that these people are basically deciding the future of humanity every single day. The way this world is going to look in the next decade and beyond is going to be dramatically different from everything we've known so far because of the work these people are pushing forward. But to me, the fact that Time magazine chose these people as the persons of the year connects directly to what we discussed earlier about how people are using AI, and that is that AI went from something geeks like me used in '23 and '24 to something everybody is using, from grandparents to grandchildren, across more or less every task, in every aspect, and in every place around the world. By the way, from a cool-factor perspective, it is worth looking at the cover of Time magazine. They actually have two covers, but one of them mimics the iconic 1932 photograph Lunch Atop a Skyscraper, attributed to Charles C. Ebbets, one of the most famous pictures maybe ever taken, showing eleven construction workers casually sitting on a steel beam hanging in the air above the New York City skyline. They took the same concept and created an illustration of these people sitting on that same kind of beam. I love the connection to the past and to how the future gets built: that was how the future was built in 1932, and the new future is being built by AI and for AI. Our next rapid-fire item is about the first time the Fed has actually admitted that AI plays a role in the cooling labor market. Federal Reserve Chair Jerome Powell declared that artificial intelligence is part of the story.
That's the exact quote, referring to the worsening job market, and he shared it during a December 9th economic forum, as reported in The Information. The specific numbers he shared: unemployment in the US is now at 4.2%, with non-farm payrolls adding only 178,000 jobs, which was below expectations, and sectors like tech and finance showing hiring freezes because of AI automation. Powell also talked about the benefits of AI, crediting it for a 2.8% productivity surge in Q3. I'm not sure exactly how they measured or calculated that, but he is definitely warning of displacement risks from AI in white-collar jobs, based on the analysis they did. If you have been listening to this podcast over the last few months, you know that I have very serious concerns about the job market in general, and definitely about white-collar jobs, over the next couple of years. You also know that in entry-level jobs this is becoming a much, much bigger problem, because AI is much better at automating entry-level work, which is simply easier to automate. Connected to that is research that OpenAI just shared. In an internal study of over 5,000 participants in roles such as software engineering and marketing, they found that AI saved workers about 50 minutes per day across different AI tools, including ChatGPT. That was reported by Bloomberg on December 8th. The biggest time savings came in research tasks, which is very obvious; I now use deep research at least once a day, sometimes multiple times a day, across different platforms, and that saves me hours of research. The lowest, actually, was coding, with only 42 minutes saved per day. But overall, 82% of users reported faster completion of tasks, with AI handling elements such as drafting emails, debugging snippets, doing research, and so on. Now, do I think nearly an hour saved per day per person is a realistic number across the board? I don't, but I can definitely tell you, from my own experience and from working with multiple companies as a consultant and in workshops, that there are specific areas where it saves a lot more than that, and there are areas that deliver significantly less savings. So I think it is very specific to the role and the task you are working on, but you can get very significant savings, and overall these savings are going to grow over longer periods of time and across more aspects of the business. I mentioned earlier in the episode the exodus that is happening from Apple right now. We have talked multiple times on this podcast about how Apple has been failing with its AI initiatives over the past few years. Dozens of AI scientists and researchers have left Apple in the past few months. Most of them are going to Meta and OpenAI, but they are also going to other places, and the roles of the people departing span from researchers to design engineers to people working on audio tech, watch interfaces, and robotics. Combine that with the departure of some of the veterans who have been running AI at Apple for a very long time, including their machine learning chief, and you understand that Apple is in serious trouble when it comes to AI. We have said on this podcast several times that they should have bought somebody. There were conversations about potentially buying Perplexity that didn't happen.
There were also very vague rumors about buying Anthropic, which is probably what they should have done while it was still much cheaper than it is right now. They are in serious trouble when it comes to their AI initiatives, and with the current turmoil and all these really smart, experienced people leaving, it doesn't look very good for Apple. The next topic is actually interesting, and it comes from an interesting direction: Gartner. Gartner has issued a recommendation for enterprises to immediately block all AI browsers, and I'm quoting, for the foreseeable future. The reason, they say, is overwhelming cybersecurity threats, which they detail in a report called Cybersecurity Must Block AI Browsers, for Now. That's literally the name of the report. They claim that tools like Perplexity's Comet and ChatGPT's Atlas have two risky aspects, and that the biggest risk is that their autonomous features could wreak havoc by exploiting authenticated environments the user has access to, for example because credentials are saved in the browser, basically enabling large-scale data leaks and malicious actions in otherwise protected environments. Now, in our Friday AI Hangout that happened yesterday (and those of you who want to join us for these Friday Hangouts are more than welcome, there's a link in the show notes), we talked about exactly this topic: how safe is it to use these tools? It's an open-mic kind of environment where we talk about AI risks and AI implementation and review specific tools, and it is just a great community of people who are implementing AI and sharing what they're learning. My answer is that it depends on how you use them. If you use them for things that are not your core business, you do not give them access to your bank account or your accounting system and so on, and you keep them to very specific tasks, they are safe and provide exceptional benefits. You just have to be smart about what you do and do not allow them to access, which can definitely be done. If you do deploy them organization-wide, then you have an issue, and that issue comes with training and data-security work you have to deal with before you roll them out company-wide. That's it for this week. There are a lot of other really interesting stories that we just don't have time for, and you can read about all of them in our weekly newsletter. It includes the links to all the articles we covered and to all the articles we did not cover, and I believe this week there are more articles we did not cover than ones we did, including Nadella admitting that Microsoft has a massive disadvantage in the AI race, Adobe's new announcement connecting ChatGPT and Photoshop, how one company is using Claude Code to run over a thousand machine learning experiments every single day, another lawsuit in which a family sues OpenAI and Microsoft over a murder-suicide case allegedly fueled by ChatGPT, and a lot more. So if you want to learn more about other topics, just sign up for our newsletter. It will also tell you exactly how you can join our events, which are free and provide access to really amazing experts and a lot of other stuff, so it's a great way to complement what we do here on the podcast. That is it for this week.
If you are enjoying this podcast, please like us and give us a review on Apple Podcasts and/or Spotify. And while you're at it, click the share button and share this podcast with other people you know who will benefit from it. I'm sure you know more than a few people who will benefit from this podcast as much as you do, and all you have to do is click the share button and invest 10 seconds in helping those people and helping us. I will be really grateful if you do that. Don't forget to check out our course. It can literally change your life, and it starts at the end of January, so go and sign up. The cohorts always sell out before the deadline, so if you want to be part of the course, sign up right now. We'll be back on Tuesday with an amazing episode that will show you how to use Claude Code to build amazing agents without knowing how to code, or even knowing what code is, and create incredibly powerful agents that you can apply to more or less everything in your business and in your life. Keep exploring AI, keep sharing what you learn with us and with others, and until next time, have an amazing rest of your weekend.