Leveraging AI

Isar Meitis Season 1 Episode 223

Money flood - insane revenue and valuation growth, AI impacting every industry, the OpenAI and Microsoft deal, new test-time compute records are changing the game, the first AI government member, and more important AI news for the week ending on September 12, 2025

Is AI on the verge of world domination… or an economic meltdown?

This week’s AI headlines weren’t about shiny new model releases, and that’s a good thing. It gave us time to zoom out and examine the billion-dollar chess game shaping our future.

From OpenAI’s $115B spend-fest to the first AI government cabinet member, and from Replit’s code-writing agents to copyright lawsuits with a twist — this episode is a crash course in just how *wild* and *wide* AI's reach has become.

Here’s your witty but grounded executive summary of the week’s most impactful AI news — handpicked and broken down by your host, Isar Meitis, with direct implications for how business leaders should think, adapt, and move.

In this session, you’ll discover:
- OpenAI’s capital-intensive moonshot and why it may still not be profitable in 2030
- Microsoft’s unexpected pivot: From exclusive OpenAI integration to paying AWS for Claude
- The first AI cabinet member in Albania: why it might be brilliant (or backfire)
- AI-made movies & TV are no longer a fantasy, OpenAI is backing a full-length feature
- Funding frenzy decoded: Databricks, Replit, Perplexity, and others are raising billions
- "Thinking" AI that works for hours: How new models are pushing past previous limitations
- 5,000 AI podcasts a week for $1 each?! The scary-fascinating rise of mass-produced audio
- FTC probes AI’s influence on kids and what it means for regulation & trust
- AI-powered AR glasses from Amazon — coming to delivery drivers and consumers near you
- Duke gives GPT-4o to all students: what this means for the future of higher education
- Why Apple is strangely silent on AI this year, and what it could cost them

Google Cloud AI Agent Handbook (PDF) - https://services.google.com/fh/files/misc/ai_agents_handbook.pdf


About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker 2:

Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we had another week in which there were no big launches of new models or big announcements from the big labs, but there are a thousand other things to talk about, and I actually like it when we have that kind of time to talk about the bigger picture. So today is gonna be a lot about the bigger picture, and we are going to talk a lot about two main aspects of AI. One is multiple examples of how broad the impact of AI is going to be on our lives. We're going to cover multiple startups and companies and developments and show how many areas, fields, industries, and so on they're involved in. So that's gonna be one big topic that we're going to talk about. The other big topic is the crazy amount of money that is being poured into AI right now, with new funding rounds from several different large companies and new revenue numbers from multiple different companies. So that is also going to show us how fast and how enormously this industry is growing. We're going to talk about China, and we are going to talk about the first AI government cabinet member in the world. So that's gonna come all the way at the end, but we have a lot to talk about. So let's get started. Before we jump into all the topics I mentioned earlier, the one big discussion this week that we have to start with is OpenAI's projections on how much money they're going to spend in the next few years. So OpenAI is projecting they will spend $115 billion between now and 2029. Now, their previous projection, from just earlier this year, was $80 billion, which is already a crazy amount of money that is unparalleled by anything we've seen in history.
So it's now almost one and a half times that amount in their new projections, fueled by several different aspects, which we're going to detail now. To be fair, there's also a huge boom in their revenue. So OpenAI is currently projecting $13 billion of revenue in 2025. That's three and a half times last year's revenue, and $300 million above their recent projections. They're also projecting that the revenue from ChatGPT alone will grow to $90 billion by 2030, which is a 40% jump from their previous outlook that they shared, again, just a couple of months ago. Going back to the spending side, OpenAI is projecting to burn through $8 billion in cash in 2025. This is $1.5 billion above their Q1 forecast. So just two quarters ago they were projecting this was going to be six and a half billion; their projection right now is $8 billion. They're planning for that number to be $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. That's four x their previous estimate for the 2028 numbers. So what's the breakdown of that insane amount of money? Well, training new models will cost $9 billion this year, up from $2 billion last year. They're planning next year, so 2026, to spend $19 billion on training new models, while inference, which is using the GPUs in order to generate the AI that we use, so the consumption of AI, is going to hit $16 billion in 2025, totaling $150 billion between now and 2030. Combine that with the insane amount of money that OpenAI is planning to invest in data centers, which in their current projections is going to come close to a hundred billion dollars more later in this decade on their own servers. So compute that they will own is going to get to a hundred billion dollars in value. This is per CFO Sarah Friar. Now, part of the jump in these expenses obviously comes from the talent war. So employee salaries jumped to $700 million last year and are now coming close to $1.5 billion in 2025. So basically more than doubling.
Yes, they grew the workforce, but a lot of it comes from just much bigger compensation packages. A big part of it was obviously Meta's poaching attempts, but it's not just them. There's fierce competition across all the main labs that are trying to tempt leading researchers to jump ship from one company to the other. Now, the company has already raised over $38 billion across all the rounds they've done so far, including the backing from Microsoft in the beginning and obviously the other infusions they received through the years, with SoftBank recently providing a lot of cash. With the rest of the cash that SoftBank has already committed, this number will come to $60 billion. The $38 billion they raised so far leaves them with only $7.6 billion of cash. Now, another interesting aspect on the revenue side is that OpenAI is planning to monetize the free users in exciting ways. So they have a lot more free users than paid users. They are projecting $110 billion between 2026 and 2030 via non-subscription perks like shopping affiliates or ads. Through these new mechanisms, which are not introduced yet, they are planning to make $2 to $15 annually per free user at an 80 to 85% margin by 2030. And by then they're expecting to have 2 billion weekly active users. Combine these two numbers together and you understand that they're sitting on a cash machine, even from the free users that are not paying them directly. So what is the bottom line of all of this? Well, Sam Altman, in a statement to his employees, said that their company might be the most capital intensive startup of all time, and I tend to agree. I don't think we ever had anything coming close to these numbers. But the other very interesting thing is, despite the explosive growth of OpenAI and the explosive growth they're projecting for the future, which is even bigger than what we've seen so far, their projected free cash flow for 2030 is on a very slim margin.
Meaning, despite the fact that they are going to be generating tens of billions and potentially hundreds of billions of dollars between now and then, they may not be profitable in 2030, which means that the crazy amounts of money that they're raising might be at risk unless they can keep on raising these kinds of funds moving forward, which, if they are growing at this pace, is probably likely. But what happens to the global economy between now and then will have a huge impact on that as well. So this is definitely the largest investment experiment of potentially all time. Now, as you remember, some of the funds that were inked for them in the SoftBank deal are pending and depend on them switching their model from a nonprofit to a for-profit organization. And one of the things that was putting that at risk is their inability to get to an agreement with Microsoft on what their partnership looks like moving forward. Because a lot of the details were not clear, you know, in the early days, Microsoft was the only backer and there were different deals signed. And now it's a very different situation. And these conversations have been going on for about a year, since OpenAI suggested that they're going to switch to a for-profit model. Well, finally, OpenAI and Microsoft have reached a preliminary agreement to revise their multi-billion dollar partnership, and they just signed a non-binding memorandum of understanding, also known as an MOU, that is defining the new phase of their partnership. Now, both companies did not provide any details on what it's going to look like, but they both said that they really want to finalize all the details and get the final agreement signed. The actual quote is: together we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety. That's basically what they said so far. So it seems to be moving forward, at least on that front.
That being said, there are multiple bodies that have had discussions with the Attorneys General of California and Delaware trying to prevent that restructuring from happening. So there are still hurdles to go through. I mentioned time and time again before, I think there's way too much money involved in this for it not to be successful. And there's also the current government, which is definitely pro business and less for regulation. And so I assume, I don't know that obviously, but I assume eventually they will be able to make this transformation and keep moving forward to spend this crazy amount of money in the next few years and generate this crazy amount of money in the next few years. But potentially as part of this agreement, even though it was not officially tied together, Microsoft made another very interesting announcement this week, in which Office 365 Copilot will now also integrate Anthropic's Claude models alongside OpenAI's ChatGPT models, which were the only models backing it so far. So the statement said that Microsoft found Anthropic's Claude Sonnet 4 outperformed OpenAI's GPT-5 in tasks like automating financial functions in Excel and generating more aesthetically pleasing PowerPoint presentations. This is per the quote from The Information. Now, to make this even more interesting, Microsoft will pay Amazon Web Services, AWS, to access Claude models. So right now, all the ChatGPT models that are running in the backend of Copilot are hosted on Microsoft Azure, meaning they're running it at cost. They're not paying anybody else to do this. But now, to integrate Claude's models into the mix as well, they will have an additional cost. Despite that, Copilot pricing currently, as of right now, stays the same at $30 per month per user.
Now, per Microsoft, over a hundred million customers currently use at least one Copilot product, while Office 365 Copilot is estimated to generate over a billion dollars in annual revenue. And to explain how big the future potential is, this is currently 1% using these models out of the 430 million paying users that Microsoft has overall. So the opportunity for growth is insane. Now, this is not the first time Microsoft works with Anthropic. Microsoft has integrated Claude's models into GitHub Copilot in the past, and it's still running in the backend. So it's not their first partnership, but it is a very interesting move by Microsoft to basically say, we are going to deliver the best models we can in the backend of Copilot, and it's not gonna necessarily be a solo show by OpenAI. I find this a very interesting move by Microsoft. It makes perfect sense. It makes the same amount of sense as OpenAI using other providers for their GPUs beyond just Azure, which has been happening for a while now. So this is becoming a less committing partnership between these two parties. And again, I think for the sake of both of them, it will be good if they finalize the terms of this agreement. Now, speaking of Anthropic, they just agreed to pay a $1.5 billion settlement in a class action lawsuit over pirating nearly half a million books to train the Claude models. So if you remember, we shared with you in June that a US district judge named Alsup ruled that Anthropic's AI training on copyrighted books is exceedingly transformative and protected under the fair use doctrine. Basically, he was saying, and I'm quoting: like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to replicate or supplant them, but to turn a hard corner and create something different. Which was an incredibly important win for the AI fair use claims. However, that was related to the books they actually purchased.
He was very critical of the fact that they had downloaded and pirated between 465,000 and 500,000 books from shadow libraries like Library Genesis and other websites that provide access to pirated books. So this new settlement is supposed to offer about $3,000 per work to the writers of these half a million books. This is four times the potential statutory damages and 15 times the innocent infringement award. So it is a very significant penalty for Anthropic that, as I mentioned, they agreed to pay. That being said, the judge himself said that this deal is, and I'm quoting, full of pitfalls, and he set a deadline of September 15th to get a final drop dead list of all the books that were pirated, and he's claiming he's gonna finalize his review on September 22nd, following up by saying, we'll see if I can hold my nose and approve it. Basically, he's not happy with the settlement, but he's saying that he will most likely try to avoid the stench, in other words, and let it move forward. I think both aspects of this ruling are very important. On one hand, he is clearly stating for the first time that training AI models on content that is out there is considered fair use, which is a huge win for the AI labs. On the other hand, he's saying you have to have legal access to that content, otherwise it's piracy and you will pay hefty fines. And I must admit, I'm happy with both sides of that equation. How exactly that is going to evolve as far as getting access to internet content that is already out there and open to the public, it is still early to know. We know that there have been many licensing deals signed between the leading labs and many large news and content providers, so that might be a path forward for the bigger players. But what happens to people like me who generate content every single day? I don't think there's a compensation mechanism for that yet. I don't know if there will be, but at least it's a step in the right direction.
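As a quick sanity check, the settlement figures cited above line up arithmetically. A back-of-the-envelope calculation, using the per-work amount and book count as stated in the episode:

```python
# Back-of-the-envelope check of the Anthropic settlement figures cited above.
per_work = 3_000       # roughly $3,000 offered per pirated work
works = 500_000        # upper estimate of the number of pirated books
total = per_work * works

print(f"${total:,}")   # $1,500,000,000, matching the $1.5 billion settlement
```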
This ruling has gotten mixed reviews and feedback from people in the Association of American Publishers and the Authors Guild. So on one hand, they are happy with the fact that they're getting compensation. On the other hand, they're not happy with all the details. But I guess that's true of every settlement: not everybody's gonna be a hundred percent happy, but everybody can live with the solution and move forward, which I think is what's gonna happen in this particular case. Now, speaking of government involvement with AI, the Federal Trade Commission, the FTC, has launched a probe into the leading AI platforms and how they are providing access to, or protecting, kids and young individuals in the usage of AI. So, according to the FTC's press release, the agency is issuing orders to seven major companies, Alphabet Inc., Character Technologies, Instagram LLC, Meta Platforms Inc., OpenAI, Snap, and xAI Corp., to provide detailed information on their AI chatbot development, monitoring, and risk mitigation for young users. Now, their goal is to basically understand how these chatbots simulate human-like communication and build relationships, and to understand how that is going to be monetized, in order to verify how user engagement, and specifically younger user engagement, teenagers and children, is going to potentially impact their development and what risks it generates for these users. The FTC Chairman, Andrew Ferguson, emphasized, and I'm quoting: protecting kids online is a top priority of the Trump-Vance administration, and so is fostering innovation in critical sectors of our economy. Basically they're saying, while this administration is go, go, go on AI, they want to also make sure they keep kids and children safe while we're developing this technology. And I'm very excited to hear both. I am really happy that they're doing this right now.
We shared with you a lot about the recent steps that OpenAI is taking in that direction because of the lawsuits they're facing and the loss of life that they allegedly led to. And so, as I mentioned, as somebody who has three kids, I am really excited to see that bigger bodies and groups, including governments, are looking for ways to mitigate the risk that this generates for children. I think as a society, we've done a horrific job when it comes to social media's impact on young individuals, and I really hope that we're going to learn from that and prevent, or at least reduce, the risks of AI amplifying the damage that social media has already done. So now let's talk about what areas of our lives, what kind of new innovations, and what aspects of modern society are impacted by AI. But we're actually going to start with the one place where it's not happening yet, which is Apple's September 2025 iPhone 17 event, which was a huge event, like Apple does every single time. And there was one thing that was very clearly absent, or it wasn't completely absent, but it definitely was very far from center stage, which is Apple Intelligence. So if you remember last year's event, iPhone 16 was all about Apple Intelligence. The phones and the AirPods and everything else took second stage, and the main stage was all about Apple Intelligence. And this year it was definitely not the case, after the huge disappointments time and time again and not being able to release any, or almost any, of the things they promised. The only things they were able to release are minor things that are more toys than actually beneficial, you know, being able to create emojis and stuff like that, but nothing significant when it comes to real AI value on the phones. So the event itself was much shorter than usual. It was only an hour and 15 minutes, and Apple Intelligence took a very small part of the overall event.
Yes, they have released a new iPhone, new AirPods, and a new Apple Watch, and they shared their new advancements in Apple silicon and hardware and software and all of that. But there were definitely a lot of crickets when it comes to Apple Intelligence. They shared some updates on Apple Intelligence, but they're minor, like translations in iMessages and FaceTime. But these are not the big things they were promising, and these are far behind what Google has already released with Pixel 10 and what Samsung is expected to release in their next offering. As we have been sharing all those developments on this podcast, it is very clear that Apple is not where it needs to be when it comes to delivering AI to its users. Now, surprisingly, despite all of that, their stock is not taking a hit, at least yet. I must admit, I'm personally surprised. I think Apple will have to make some significant moves, and as we shared last week, those moves might come from partnerships with Google and potentially other vendors to actually drive Apple Intelligence, at least in the near future. But now let's start talking about where AI is impacting. So first of all, there was a very interesting blog post by Andreessen Horowitz this past week. a16z, one of the largest VC funds in the world, they know one or two things about what's happening, and they released a very interesting paper that they're calling the Great Expansion, a new era in consumer software. They share that the number of companies reaching a hundred million plus in ARR within two years is nothing like we've ever seen before. And the interesting thing is that they're saying that the great expansion comes through usage-based billing and the consumer-to-enterprise transition, which are two things that were not common before the AI era. So pre AI, most consumer software relied on one of two mechanisms: either ad-based revenue, such as Instagram, TikTok, Google, et cetera, or a flat fee subscription.
Like most SaaS that we know today, with the top companies retaining 30 to 40% of revenue after the first year. So many SaaS companies had significantly high churn rates that had to be replaced consistently in order to keep growing the company. And what they're saying is that there is more than a hundred percent retention, and I'll explain in a minute how that works, when it comes to using AI tools. And the reason there's more than a hundred percent retention is that many users who have used AI on the personal level, and continue to use it on a personal level, bring it to their work as well, which now means you have another license for the same person using AI for work. So that adoption between the personal use and the enterprise use is actually giving companies higher than a hundred percent retention, which, again, compare that to traditional SaaS, which was less than 50% on average. They're also mentioning the tiered approach of between, you know, $20 to $250, depending on which platform you're on and what kind of licensing you're on. So that tiered approach enables companies to charge people the right amount of money for the amount of tokens they're consuming, and the people who are consuming more than that have to continue paying beyond the subscription through usage-based API pricing. And so usage-based billing allows these companies to continue to grow as the usage grows and not be confined to a flat fee per user. The other interesting aspect, as I mentioned, is the fact that most of these companies have driven their initial growth through consumer-based products that then led to B2B revenue, which always takes longer to develop compared to traditional SaaS, which had to start with B2B. Like, there are very few companies who developed a B2C product and then evolved to be a B2B company. And this is happening with AI.
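The more-than-100% retention idea described here maps onto the standard net revenue retention (NRR) formula: revenue kept from an existing cohort, plus expansion (like that extra work license), divided by the cohort's starting revenue. A minimal sketch, with made-up cohort numbers that are illustrative assumptions rather than a16z's data:

```python
# Hedged sketch: net revenue retention (NRR) for two hypothetical cohorts.
# All dollar figures below are invented for illustration only.

def net_revenue_retention(start_revenue, churned, expansion):
    """NRR = (starting revenue - churned revenue + expansion revenue) / starting revenue."""
    return (start_revenue - churned + expansion) / start_revenue

# Traditional SaaS cohort: $100k start, heavy churn, little expansion
saas = net_revenue_retention(100_000, 50_000, 5_000)

# AI-tool cohort: some personal users churn, but others add a work seat on top
ai = net_revenue_retention(100_000, 20_000, 35_000)

print(f"Traditional SaaS NRR: {saas:.0%}")  # 55%
print(f"AI tool NRR: {ai:.0%}")             # 115%, i.e. above 100%
```

The point of the formula: when expansion revenue from personal-to-work adoption exceeds what churns, the ratio climbs above 1.0 even though some users left.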
So a lot of the things that we took for granted through the SaaS era are now being challenged and developed in completely new ways that were just not in existence, definitely not at this scale, before. So the way we do business with software is changing and evolving right in front of our eyes. So the first aspect we covered is how we actually do business with software today. The next one is education, or higher education. So Duke University just announced that all their undergraduates, plus staff, faculty, and professional school students, gained free unlimited access to GPT-4o starting on June 2nd, 2025. And they're running a pilot together with OpenAI, together with a tool they call DukeGPT, which is a secure, university-managed AI interface that prioritizes privacy and integrated resources. And all of those are combined into a university-wide attempt to understand how to integrate AI into the higher education process. Now, they announced this in May, and the goal is, by the end of the fall semester, to do a summary and understand how this has worked: what are the pros, what are the cons, what are the issues that they tackled, and how are they planning to move this forward. And there is mixed feedback from different professors. Some of them are embracing it, like David Carlson, the associate professor of Civil and Environmental Engineering, who allows AI in his machine learning course if students disclose usage. And he stated: you take credit for all of ChatGPT's mistakes, and you can use it to support whatever you do. Basically, you're free to use AI, but if you get this wrong, you're gonna fail the course, and it's your fault even if the AI is the one that did it. And I think that is the right approach, because I think that is the way the business world is going to operate, and the whole point of your higher education, unless you're staying in the research universe, is to prepare you for the real world and the business world.
And I think this is a very healthy approach. There were other professors that are still banning it, especially in the areas of the humanities, arguing that it completely erodes the value of developing independent thinking. And one of the arguments that one of the professors used is: if you want to be a good athlete, you would surely not have someone else do the workout for you, which makes perfect sense. So she's saying that using ChatGPT or any other tool, if you are trying to learn how to write, is exactly like allowing somebody else to run for you if you're trying to develop as an athlete. And I must admit, what she's saying makes sense. That being said, I think teaching people how to collaborate with AI to develop ideas and improve their text is probably the right approach, but it's definitely still a mixed picture in academia. I do think that academia is gonna get challenged as AI gets more and more adopted, both by individuals as they're coming into academia, as well as in the output they're expected to deliver: again, people who are ready to fill job positions in society, which will require them to know how to use AI across the board in everything that they're doing. But the fact that big, well-known universities are now running these initiatives shows very clearly that AI will have a significant impact on education and higher education. The next topic is our day-to-day life, with rumors that Amazon is also developing augmented reality glasses. Actually, two separate models of augmented reality glasses. One is called Jayhawk, for consumers, basically something that will compete with Meta's Ray-Bans and Oakleys. And the other is actually called Amelia, and it's gonna be for their delivery drivers, helping them with package sorting, delivery, turn-by-turn instructions, and so on. So this is gonna be more of a job-oriented set of glasses. No clear details have been released yet.
This was all disclosed by The Information, but they're planning the release of their consumer glasses either late 2026 or early 2027, and the workers may start using the glasses in Q2 of 2026. Now, combine that with the explosive growth of these kinds of glasses, with more and more companies coming in, including Chinese manufacturers, and you understand that in the very near future, most of the people who are going to be wearing glasses outside are actually going to be recording and analyzing everything in the world around them. Which means everything you do in public is going to be recorded and analyzed. And this raises a lot of questions, especially once you start diving in: people next to you at the ATM can record everything that you're doing, people in public restrooms can record everything happening around them as long as they're wearing some kind of a device, courtrooms, et cetera, et cetera. You understand how this can go very, very wrong, and you understand how there are gonna be new rules and regulations and social agreements about how and what is acceptable, and it is going to be very different than what we know right now. Another area where AI will have a significant impact is on filmmaking and TV series making. I must admit, I projected back in 2024 that in 2025 somebody would start creating AI-based series and potentially full films. And this week was the first time I actually found the first AI-based series. It was really weird and yet really interesting. The show is called Unanswered Oddities, and it is weird and exciting at the same time. So if you wanna look for something that will show you what the future might look like, how one individual can create an entire TV series and actually capture your attention at least for a while, then go and check it out. And this is just the very first step.
But this week OpenAI announced that they will be fueling a groundbreaking new movie called Critterz, which is going to showcase the potential of AI to create movies faster and cheaper than traditional Hollywood methods. So as you might remember, there was a Critterz short film in 2023 that used DALL-E, which is OpenAI's early-days image generator, to generate the visual worlds of a movie that was then animated by Emmy Award-winning creators. Now, that was a short film that was using very basic capabilities of AI. Well, this is gonna piggyback on top of that. So the new Critterz movie is supposed to be a full-length movie, and the plan is to produce it for less than $30 million. Now, $30 million sounds like a lot of money, but to put things in perspective, producing Toy Story 4 cost $200 million. The other thing is this film is supposed to take nine months to complete, beginning to end, using OpenAI's technology to generate the characters and the backgrounds, and using humans in order to animate, provide the voice acting, and refine the work of the AI. The other reason to involve humans in the process is to ensure copyright eligibility, because as we all know, as of right now, AI-generated content alone doesn't have copyright protection. Now, even the script, which is gonna be mostly written by humans, is going to be assisted by AI tools. OpenAI is going to provide the tools, compute, and resources, but is not directly involved in the production or any marketing decisions for the movie itself. So again, this is moving exactly as I expected, and it's going to have a dramatic impact on Hollywood and other production studios. If you remember, when there was the big strike in Hollywood just over a year ago, and eventually they signed the deal, I said that deal is going to be worthless.
Because once others are able to generate AI movies for a fraction of the cost, these studios, regardless of which agreement they signed, will have two options: either stay with the agreements that they signed or run out of business. Either way, the people who signed the agreement will have no job. And so I don't think we're there yet, but I think we're definitely moving in that direction. What does that mean for the future of filmmaking? I don't know. I assume there's going to be some kind of a premium for actual humans acting in movies, but I don't know how long that can last from a financial perspective. How much more money will people be willing to pay at the cinema to watch a movie that they cannot differentiate from actual real life and real actors? So time will tell how that is going to evolve, but I can tell you it is not going to be what it is right now. Staying in the content production and entertainment business, a new startup has a crazy ambitious new goal in the podcasting universe. Jeanine Wright, who is a former Wondery executive, has established a new company called Inception Point AI, which is gearing up to unleash 5,000 podcasts and 3,000 episodes per week, all produced for under $1 each. Just to put things in perspective, if you are running lean, like I am, every podcast episode takes a few hours to produce. I am not gonna share with you what my hourly rate is, but it is more than $1 per hour, and it takes a few hours, and then there's editing, and then it gets released for you to enjoy and consume. So producing thousands of episodes for $1 per episode is on a very different scale compared to actual human-generated podcasts. Now, their goal is to go after niche topics like weather reports and quirky sports, and the idea is to create podcasts in areas where there are not enough of them right now, while providing deep content and interesting elaborations on these niche topics where they may or may not exist.
This will obviously go beyond that, because if they do find success and crack the code on how to generate content at the right length, with the right amount of humor, at the right speed, with the right amount of whatever they want, because it's AI-generated, they can reproduce that and go after any other topic in the world. Now, as a podcaster, am I happy about that? I must admit that, A, I don't think I can do anything about it, and B, I think there is going to be, in this particular case, room for everyone. So yes, will I have to compete with AI-generated shows? I think I already do. There are several shows out there, especially on AI, that are AI-generated. I don't think it's at that scale yet, and I don't think anyone else has raised that kind of money to do it, but it's already happening. And I think, just like any other field when it comes to content creation and consumption, AI is going to play a big role in that generation. People will have to pick and choose whom they want to listen to, follow, or view, and they will have to choose how much of it is AI-generated versus human-generated. I must admit that I think the next generation won't even care. Right now it sounds like a big deal for us, like, oh, I really want to listen to a real person. I think to my kids it won't really matter; as long as they find the content interesting, engaging, and so on, they will consume it regardless of how it was generated, because they're going to grow up in a world where a lot of the content is AI-generated. Now, staying on the topic of different fields, but diving more into new features from companies that we know and how they are delivering new value from their existing platforms: Claude just announced that you can now create and edit Excel, Word, PowerPoint, and PDF files straight in Claude.
It is currently only available for Max, Team, and Enterprise users. Access for Pro users, which is the plan I'm on, is coming soon. The demos are absolutely mind-blowing, showing how you can take raw data and create really polished outputs, whether it's detailed statistical analysis, complete sets of Excel outputs, or PowerPoint presentations, et cetera. There was a very interesting example by Ethan Mollick on LinkedIn showing how, with a simple prompt, he was able to create a whole set of multiple sheets with multiple formulas all connected to one another, generating a financial model and projections for a startup company. In addition, because it knows how to do all these things, it is a great way to cross between formats: taking inputs from a PDF, converting them into PowerPoint slides, and from that generating invoices in spreadsheets, or whatever other combination you want, because it understands the content in all these formats and knows how to generate the output in all of them in a much more advanced way than we've seen so far. Now, if you are on one of the eligible subscription levels, all you have to do is go to Settings, then Features, then Experimental, activate the relevant functions, and you'll be able to use them. You can also connect it to your Google Drive to save all the outputs straight into a specific folder on your drive. Another thing Claude announced this week is a memory feature for Team and Enterprise plans. As of September 12, Claude's memory capability enables it to retain and reference team projects, client details, and work patterns, boosting productivity across the board. That's coming from Claude's announcement itself. Now, the interesting thing is that it's a little different from OpenAI's memory: Claude creates distinct memory pools, so different segments and different memory areas for each project.
This basically keeps the information from being distributed across the board, which is a very important aspect. If you think about exposing your company's information to Claude for it to remember different things, you do not want everybody in the company to have access to the information that HR has uploaded, with everybody's reviews and salaries, as an example. So the ability to keep the memory confined and confidential is a very positive approach, which I do not believe OpenAI has right now across their platforms. I believe this is the future, though: you would be able to have memory turned on while knowing that it's accessible only to the relevant people in the organization. Now, just like in OpenAI, you can control what Claude remembers. You can view and edit the memory summaries in your settings and instruct Claude to focus on or ignore specific details or specific areas of interest. Another interesting aspect they announced, which I'm still not exactly sure how it's going to work, is that users will be able to transfer memory details from other AI tools into Claude and vice versa. So basically you can back up and migrate the memory of Anthropic's Claude into other tools and the other way around. I don't think there's a standard right now for how you do that, but I think that's a very interesting approach for the future. In general, I think we're going to see more and more separation of data and tooling in the AI world, which will allow much broader adoption, because your data will stay yours and you can then switch tools around as you wish. Right now, companies are investing billions of dollars in making that happen, basically separating the two layers, but I think it will become common and basic practice as we move forward. Another company that made a big announcement this week is Adobe. Adobe has just launched a suite of AI agents powered by Adobe Experience Platform, also known as AEP.
The main thing they released is the AEP Agent Orchestrator, which is a platform that enables businesses to manage and customize Adobe and third-party AI agents in and under the Adobe universe, including tools that will be able to understand all the context in your entire Adobe universe and take multi-step actions in order to support different processes. The two interesting things I find here are, one, the fact that they are going to allow third-party agents to be part of that environment, and two, that they understand that one of the first needs as you start adding more agents is orchestration. If you're adding 3, 4, 5, 20, 30, 300 agents into your universe, somebody needs to manage them, and the people are already preoccupied with doing other things. So building an orchestrator agent that can control the other agents and define what data each needs to be exposed to, when it needs to be part of the process, and so on, is basically hiring your first manager before you're hiring employees. And that makes perfect sense to me. Another big agent announcement this week is from Replit. Replit just announced Replit Agent 3, which is a huge shift in, A, what their tool can do, and B, the goals they have set for their company. Agent 3 offers 10x greater autonomy than Agent 2, and it allows it to do a lot more, including build, test, and fix in real time. Now, I'm a heavy Replit user. It's my number one go-to vibe coding platform, and I did not know they had released it until it popped up on the screen and I started engaging with it. Since I'm working on the platform every single day, it was incredible to see how much more powerful the new model is. It is actually running things on its own, testing stuff on its own, giving itself feedback, and continually iterating and fixing stuff without me even having to be involved. It was operating the actual application.
It's using a computer use function to actually run the application it is developing in order to test it and figure out what's going on. It is now writing its own logs and its own breakpoints and testing them on its own. Previously, it created log capabilities, but then I had to open the console on the right side of the browser, navigate to the right place, copy the output of the logs, and paste it back into Replit. Now Replit does it on its own in its own console, and it closes the loop without me having to do anything. According to Replit, this can happen because the new model can run for 200 minutes, over three hours, without supervision. In addition, the tool itself can create other agents. What does that mean? It means that right now you can use Replit to create agents for anything in your company. It becomes an agent factory that can connect to anything you allow it access to. But on top of that, it can spin up its own agents to complete the task you gave it, and those agents help Replit create agents that help Replit create more agents, et cetera, et cetera, going into a loop that makes my head hurt. But it really is an incredible jump forward in vibe coding and in the ability to get a glimpse into what the future looks like, when software can generate other pieces of software to help itself solve problems and so on and so forth. In addition, they've already built connectors from Agent 3 to known platforms such as Notion, Linear, Dropbox, and SharePoint, which basically makes it an advanced version of Make.com or n8n and so on, merged with their ability to write really advanced code. I think this is the future of everything. My personal experience this week has proven to be so impactful on these concepts and how I understand them now versus how I understood them just a few days ago.
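To make that build-test-fix loop a little more concrete, here is a minimal, purely hypothetical sketch in Python. This is my own illustration, not Replit's actual implementation: `run_tests` stands in for the agent running the app and collecting its own logs, and `propose_fix` stands in for the model turning those logs into a patch, with the loop iterating until everything is green without any human in the middle.

```python
# Hypothetical sketch of an autonomous build-test-fix loop.
# A toy "codebase" is a dict of component name -> whether it currently works.

def run_tests(code_state):
    """Stand-in for running the app and collecting logs: returns (passed, log)."""
    failures = [name for name, ok in code_state.items() if not ok]
    if failures:
        return False, "FAILED: " + ", ".join(failures)
    return True, "all tests passed"

def propose_fix(log, code_state):
    """Stand-in for the model reading its own logs and patching one failure."""
    fixed = dict(code_state)
    for name in code_state:
        if name in log:
            fixed[name] = True  # "patch" the first failing component mentioned
            break
    return fixed

def agent_loop(code_state, max_iterations=10):
    """Run -> read logs -> patch, repeating until green, with no human input."""
    for i in range(max_iterations):
        passed, log = run_tests(code_state)
        if passed:
            return code_state, i
        code_state = propose_fix(log, code_state)
    return code_state, max_iterations

state = {"login": False, "checkout": False, "search": True}
final, iterations = agent_loop(state)
print(final, iterations)  # all components fixed after 2 patch iterations
```

The point of the sketch is the shape of the loop, not the patching logic: the agent consumes its own test output and feeds fixes back in until the run passes, which is exactly the part I used to do by hand with the browser console.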
The ability of software to create agents to help in the process is something I never fully understood, and I probably still don't fully understand. But seeing it in front of my own eyes, seeing it generate its own tests, run them, and then solve problems, just blew my mind. And you can take that to any aspect of your work: anything you are struggling with, any bottleneck you have in your company, you'll be able to open Replit, or any other tool that competes with it, and say, hey, here's the problem I have. I want you to look at it, come up with solutions, develop them, test them, and just give it to me when it's working. And it will just do it. So that's Replit, but staying on the same topic of software development and how AI is pushing its boundaries forward: a company called Blitzy, an autonomous software engineering platform, looks way beyond just code generation to the entire software lifecycle. They just took the top spot on SWE-bench, the top benchmark for software creation, scoring 86.8%, which is 13% higher than the previous top score. It is also the largest single advancement on this benchmark since it was established. So to put things in perspective, that's a huge jump. And the way they're achieving it is by enabling hours-long reasoning instead of seconds or minutes. As their CTO said, the unsolvable problems weren't actually unsolvable; they just required deeper thinking than System 1 AI could provide. By design, their platform enables AI to think for hours or days rather than seconds or minutes, unlocking solutions to problems that stumped every previous approach. They also gave some practical examples that go beyond the benchmark, like how Blitzy was able, on its own, to modernize four million lines of Java code in just 72 hours.
So on these last two topics, there are two things I want to say. Obviously the forefront of what AI is impacting right now is software generation, but it gives us a crystal ball into how it's going to impact everything else in business as it evolves. It is going to be as good at other things as it is at code generation right now, and it's going to progress at the same pace; it's just lagging a little behind, because code is a much more structured universe where AI can excel. I don't see any reason why the same won't happen in other worlds. The other very interesting aspect is that it shows that the capabilities of AI can be dramatically extended just by giving it more time to think. Basically, the test-time compute scaling laws that we saw for the first time less than a year ago are proving to extend way beyond what we know right now. So when you see ChatGPT or Claude or Grok or any of the other tools think, and it usually thinks for two, three, five minutes, it can apparently think for hours and potentially days to solve significantly more complex problems. Which basically means that if you're willing to pay for it, you can get nearly unlimited reasoning on really complex problems and solve them, whether that's in business or science or education or any other area you want. Which I find, A, really exciting, and B, really scary. And yet this seems to be the approach both these companies have taken to go way beyond what they were able to do before with AI, and I have a feeling it translates very well to other fields as well. Now, staying on the topic of agents and agentic behavior, Google Cloud just released a comprehensive handbook detailing 10 practical applications of AI agents that can enhance business efficiency.
Now, this is a very non-techy, user-friendly kind of handbook, so if you are deep into the agent world, it's not going to provide you much value beyond an overview. But if you don't know much about agents and what they can do for you right now, this is a great handbook, and we're going to link to it in the show notes. The paper is called the AI Agent Handbook, as simple as that. It's a PDF, and you can get it from Google's website. The 10 use cases are: number one, effortlessly search enterprise data like never before. Number two, transform complex documents into engaging podcasts; this is the feature we know from NotebookLM and Gemini. Number three, generate your best ideas in minutes. Number four, consult an expert on anything. Number five, personalize customer experience at scale with multi-agent AI. Number six, boost marketing engagement and conversion rates. Number seven, shorten the sales cycle. Number eight, find a bug in your code and fix it with just a prompt. Number nine, simplify onboarding and other HR workflows. And number 10, build your own AI agent. Each and every one of those maps to tools or platforms they've already developed, allowing users to build their own agents, and this is just a way to expose people to what agents are and what they can do today. Again, this is nothing too technical; it's a relatively short document that explains what an agentic future can look like across multiple aspects of our businesses. Another interesting statistic from it says that 33% of new enterprise apps will include agentic AI by 2028, up from less than 1% in 2024, and they're claiming that because of that, it will enable 15% of day-to-day work decisions to be made autonomously. So what they're claiming is that within two and a half years, more than 10% of business as usual is going to be done by AI and not by humans.
And like every other projection on technology in history, and definitely projections on AI, it probably undershoots the actual reality, so the number might actually be higher than that. Staying on new announcements that are completely groundbreaking and changing the way we can think of and engage with the world: a spinout of the MIT Media Lab called AlterEgo is a wearable device with a non-invasive neural interface that allows you to communicate without speaking and without listening to actual sounds. The way this works is that it captures peripheral neural signals when you use your inner voice. You just have to think about what you want to say, and it knows how to capture that and transmit it either to a computer or to another AlterEgo device, which means you can have a conversation with somebody next to you or somebody on the other side of the planet, as long as the internet connection is fast enough, and you can say things without making any sound. The way you hear the feedback is through bone conduction audio. For those of you who haven't tried that, there are many headphones today, especially for cycling and similar activities, that you just put on the sides of your head, and through bone conduction you can hear perfectly fine while still not blocking your hearing. So this is a new means of communication between devices. Think about voice activation of everything. I think I've shared with you multiple times that I almost don't type anymore; I voice-type almost everything on my computer, which makes me communicate with the computer a lot faster. Well, now I won't have to voice-type at all. I won't have to say anything. I will just have to think about what I want to say, and it will show up on the screen. And the same thing can happen with other people.
You can have a conversation with somebody else without saying anything and without anybody hearing any sound. This is the closest thing to telepathy that I know of, and it already exists; it's not science fiction. And this is another thing that I think will become very, very common. Combine that with other wearables that can see the world (by the way, AlterEgo has cameras that can see the world around you), and you understand that this becomes an extremely powerful tool. Add to that faster internet connections, or the ability to run AI on board the device itself, and you understand that most people around you could become cyborgs with extreme intelligence capabilities that are just not available today, even though the technology enables them right now. Now take that beyond the one-to-one level to the many-to-many level, and think about replacing Slack channels with bubbles of thoughts on specific topics, where the AI itself understands what relates to whom and who needs to know what and when, and can transmit that information to the right people at the right time. Or it could be queried just by, more or less, thinking about it, saying in your inner voice the information you're looking for, and it will query all the other communications with all the other relevant people you have access to and provide you that information. Again, my head hurts just thinking about it, but I think this future is not that far out. Another company that made a big announcement, going from being just a storage solution to being an AI agentic universe, is Box. Box just launched Box Automate, which is what they call an operating system for AI agents.
They have deployed several different agentic solutions in the past few months, and now they are delivering a much more comprehensive solution that will allow you to query and use your data across all the Box solutions in a much more effective way. Or, as their CEO Aaron Levie put it, AI agents mean that for the first time ever we can actually tap into all this unstructured data. So basically you can ask any question about any data, whether structured or unstructured, and get an answer in a very effective way. He's also saying that their main focus has been solving the context window limits of large language models like Claude or ChatGPT, but that there are no free lunches: there are still limitations in both cost and the infrastructure required to make that possible, but it is possible. Staying on the topic of what is possible, or what is going to be possible, with AI: Mira Murati's Thinking Machines Lab, the company she established just earlier this year after raising $2 billion in a seed round, has released their first research paper, called Defeating Nondeterminism in LLM Inference. What this research paper sets out to prove is that AI can deliver consistent results. The assumption, if you will, so far was that by definition, AI, being a statistical model, will generate slightly different results every time you run it. As an example, if you ask AI to write a report based on a huge amount of data, and you ask it to write it five times, you're going to get slightly different results every time. They're all going to be a summary of the data, but it's not going to be the same summary, which is a really big problem both in research and in business. Now, the explanation for that was always that it depends on floating-point math and GPU concurrency, which is how these things actually run in the backend, and so this was a quote-unquote necessary evil.
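To make the floating-point part of that explanation concrete, here's a tiny Python demonstration, my own illustration rather than anything from the paper. Floating-point addition is not associative, so when a GPU sums the same numbers in a different order, which happens when requests are batched and scheduled differently, the result can come out with different bits, and those tiny differences get amplified once the model samples its next token:

```python
# Floating-point addition is not associative: grouping changes the result.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # the huge values cancel first, so the 1.0 survives
right = a + (b + c)  # the 1.0 is absorbed by the huge value and lost

print(left)   # 1.0
print(right)  # 0.0

# Even mundane sums drift: ten copies of 0.1 do not sum to exactly 1.0.
print(sum([0.1] * 10) == 1.0)  # False
```

Batch-invariant inference kernels, as I understand the paper's approach, pin down this reduction order so the same input always produces the same bits no matter how requests happen to be batched together.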
However, apparently this is not a must. By enforcing what they call batch invariance in inference pipelines, they can now generate outputs that are identical regardless of how many times you run them. This solves one of the biggest problems generative AI solutions have had so far when it comes to using them in more data-sensitive environments, and by solving it, they are moving the AI world a very big step forward into doing things it just couldn't do before, or couldn't do consistently before. They have committed to sharing all their findings, and they are launching what they call Connectionism, a blog series to share everything they are doing and everything they learn, including their code and insights, for the benefit of the public, basically like OpenAI was in its early days. And as you probably know, most of the people behind Thinking Machines Lab are executives and top researchers from OpenAI. Since this company was able to raise $2 billion before even explaining what exactly they are going to do, this is a great segue into the next segment of this episode: the crazy amounts of money being raised in the AI world right now. It also comes full circle from the beginning of this episode, when we talked about the expenses OpenAI is going to have. So, Databricks has just closed a $1 billion funding round valuing them at $100 billion-plus. This is just nine months after their previous raise, and it's fueled by their explosive growth to $4 billion in annual recurring revenue. Databricks hit that $4 billion ARR in Q2 of 2025, which is 50% year-over-year growth, with AI products alone hitting a $1 billion revenue run rate. Now, another very interesting aspect: as you know, they mostly provide databases and AI tools on top of those databases.
Their CEO Ali Ghodsi revealed that, and I'm quoting, a year ago we saw in the data that, for the first time, 30% of the databases were not created by humans; they were created by AI agents. And this year that statistic is 80%, basically meaning that AI is creating its own databases in order to run things more efficiently. Another company that has grown like crazy because of this capability is Supabase, one of their biggest competitors, which is a database platform built to enable AI to spin up the databases it needs on the fly. And now Databricks is also moving in that direction. So 80%, four out of every five databases created on the Databricks platform, are created by AI and not by humans. This, again, makes my brain explode when you combine it with everything I said before, and you understand where the future is going: AI generating other AI, which generates agents that can do things faster, that can build their own databases, and so on and so forth. Another huge valuation jump this week is Mercor. For those of you who don't know, Mercor is a competitor of Scale AI, the company that was acqui-hired by Meta in order to take their CEO, Alexandr Wang, who is now running Meta's superintelligence effort. Scale AI was number one; Mercor was one of its competitors. Now, because Scale AI has been more or less disassembled, having lost a lot of their talent to Meta Superintelligence Labs and a lot of their big clients, who don't trust that their data won't be available to Meta, that has driven incredible growth for Mercor, one of their main competitors. They've just raised a Series C round, which values them at over $10 billion, up from $2 billion just seven months ago. And while this is explosive growth, their revenue has also exploded, and they're now at a $450 million revenue run rate.
That is a huge jump compared to what they had just seven months ago. Again, to put things in perspective, this company was founded in 2023, two years ago, and they're now worth $10 billion and generating $450 million a year. Another company exploding in valuation is Perplexity. Perplexity just raised $200 million at a staggering $20 billion valuation. This is just two months after a $100 million raise, and the third raise they've done in 2025, with their valuation growing this year from $14 billion to $20 billion. Their revenue, according to a source that spoke with TechCrunch, is approaching $200 million a year, which again is very significant given that most of their users are free users. Now, I shared with you earlier about Replit's release of Agent 3. Well, they have just closed a $250 million funding round at a $3 billion valuation, nearly tripling their worth since 2023, with their annualized revenue over 50x what it was just under a year ago. They claim a community of over 40 million people using their platform to vibe-code different solutions. Their revenue right now is around $150 million ARR, a 50% jump from the $100 million ARR they had in June. So this is 50% growth in less than three months, for a company that's not growing from a hundred thousand dollars but from a hundred million dollars. This shows you the appetite the world has today for these kinds of solutions, and it shows you why the crazy amounts they're raising are actually in tandem with the revenue they're generating. And this connects to the raise we discussed last week: Sierra raising a new funding round at a $10 billion valuation. That company was established by Bret Taylor, who used to be the co-CEO of Salesforce and is on the board of OpenAI; he left Salesforce in early 2023 to establish this company that builds agents.
By the way, what's interesting about them, going back to connecting a few dots from this episode about breaking the traditional way software is paid for: Sierra charges you only when its agents autonomously resolve customer issues, and it's free if the issue is escalated to humans. So basically you are not paying for usage, you're paying for success. Think about how profound that is: they trust their agents so much that you don't pay if they don't actually solve customer service issues. And that explains why they're growing so quickly. Now, one of the things Taylor said is that he agrees with Sam Altman's statement that someone is going to lose a phenomenal amount of money, we don't know who, and a lot of people are going to make phenomenal amounts of money. And he's comparing the AI frenzy to the internet boom back in the early 2000s, where huge flops like Pets.com, which raised stupid amounts of money and completely disappeared, coexisted with giants like Amazon and Google, who we now know came to conquer the world. So the reason they're saying that is that there's obviously a bubble, but it's a very different bubble from the one that existed back in 2000. And I lived through it. I worked at a startup that in early 2000 had twenty-something employees, and I was one of them. By late that year we had over a hundred, so we grew by almost 5x, only to come back down to about 35 people a few months later when the whole thing collapsed. Now, do I think it's the same scenario right now? The answer is no. And the reason is that there is real revenue being generated by many of these companies, very significant revenue, as I shared with you in the examples over the past few minutes.
That being said, there are many, many companies raising significant amounts of money that are going to evaporate and disappear, losing a lot of money for the VCs and platforms investing in them. And it's not always easy to know which ones are going to survive and which are not. Now, there are more and more discussions and articles out there about a potentially looming AI winter. There was an article in Fortune magazine called "Is Another AI Winter Coming?", and there were several other articles like it. If you remember, we talked about the MIT study that claimed 95% of AI pilots don't provide any value; I discussed with you in past weeks why I completely disagree with the way they performed the research and the conclusion they reached, but it was still out there, and it was published by MIT, so people take it seriously. And then you have statements like the one we just shared from Taylor, or the statement by Sam Altman that some venture-backed AI startups are grossly overvalued. The interesting thing in this article is that it shows previous cycles in which there was significant hype around AI that then completely disappeared, starting from 1958, when Frank Rosenblatt, an AI researcher, claimed that AI would soon recognize people and translate languages instantly. That was in an article in 1958. Then, obviously, AI slowed down; then there was another spike, then another winter in the sixties and seventies. Then an AI leader in the early seventies, Marvin Minsky, projected that in three to eight years we would have a machine with the general intelligence of an average human, and we know that didn't happen. Then another winter happened in the eighties, after the US government spent $1 billion at the time, which is like $10 billion right now, on what were called expert systems, AI solutions for businesses, which were abandoned completely afterwards, and so on and so forth.
So there are several parallels to what happened back then, both in terms of infrastructure investment and in the projections being made. But I think it is very, very different this time around. I think the technology is actually there. A lot of the infrastructure that did not exist back then, like high-speed internet, connected data centers, GPUs, the transformer architecture, all these things enable what we have today, which is actual and not conceptual. And I think that's why this article, while an interesting read, is not really connected to reality, because what we have today is real and not just projections. Now, could the projections of when we're going to hit AGI and ASI be stretched? Yes. Whether it's two years, five years, or ten years is still debatable, but I think that timeframe will yield those kinds of results. And like I've said multiple times, I don't think it matters. Every single week we have new breakthroughs enabling things that were not possible before; we just shared several of them on this episode that happened this past week. Each and every one of them opens a whole universe of AI use cases that are in their infancy and are going to keep growing and developing. I personally don't think there's an AI winter coming. I actually think Q4 of this year is going to be complete madness when it comes to new releases from the big labs, as well as tools built on top of them, continuing the crazy trend we have seen over the last two years. A few additional pieces of information about the infrastructure running all of this: Microsoft just signed a deal with Nebius, a company that provides data centers, worth $19.4 billion, to support and host AI data centers for Microsoft.
If you remember, early this year there was a whole thing about how an AI winter is coming and things are gonna slow down because Microsoft had canceled several data center builds it was planning. And then Satya basically came and said, no, we're not slowing down, we're just switching the strategy from building to leasing. Everybody thought that was an excuse. Well, it was not an excuse, and here it's happening: they just signed a deal with a single company for $19.4 billion worth of data center hosting, and that could grow even beyond that number. Microsoft also signed a very interesting deal with the US government, basically agreeing with the US General Services Administration, known as GSA, to provide over $3 billion in AI cloud services for federal agencies, potentially saving them another $3 billion by the end of next year, for a total of $6 billion in savings. What they're doing is providing Microsoft 365 Copilot at no cost for 12 months, with a potential extension of another 12 months with a tailored version tied to the specific needs of the federal government. In addition, they're providing discounts on Microsoft 365, Azure cloud services, Dynamics 365, and cybersecurity tools for 36 months. Satya Nadella, their CEO, said the following: "With this new agreement with the US General Services Administration, including a no-cost Microsoft 365 Copilot offer, we will help federal agencies use AI and digital technologies to improve citizen services, strengthen security, and save taxpayers more than $3 billion in the first year alone." Now, if you remember, OpenAI did almost exactly the same thing just over a month ago. On August 6th, OpenAI shared that they're going to provide GSA and other federal agencies with ChatGPT for all employees for $1 per year per agency, which is basically free. And if you remember, at President Trump's inauguration, all the tech giants were sitting next to him. 
And if you thought, what the hell is happening here? Well, this is what's happening here. Trump is a businessman. That's what he has done his entire life, that's what he knows how to do, and that's how he runs the government as well. So he is giving them an environment to thrive in, with very supportive regulations and very supportive budgeting across different things. And in return, he's getting benefits, huge benefits, billions of dollars of benefits to the US government and, in the long run, to the US economy. Now, whether you like the way Trump is acting or not, there are definitely big benefits to the government and hopefully, like I said, in the long run to the US economy from the way he's acting. Staying on the topic of infrastructure, Oracle's Q1 fiscal 2026 results were just released, and they are showing a staggering $455 billion in remaining performance obligations, with their CEO Safra Catz stating that they signed four multi-billion-dollar contracts this quarter. And these are not unknown companies participating. The companies they signed these deals with are OpenAI, xAI, Meta, Nvidia, and AMD. All of them are part of this crazy frenzy for additional compute, and Oracle is one of the bigger winners, which is really surprising. Oracle is kind of one of those old-school companies that has been doing the same thing for the last 30 years or so, and now their Oracle Cloud Infrastructure is absolutely booming. They're projecting that their cloud infrastructure revenue is gonna go from $18 billion this year to $144 billion in 2030. This sent their stock up 40% in a single day, making Larry Ellison, their founder, the wealthiest person on Earth, surpassing Elon Musk. But so far in this episode, most of the companies we talked about are US-based. The Chinese are not stopping either. 
So both Alibaba and Moonshot AI, which is a company backed by Alibaba but is its own startup, released models this week, and both of them made it into the top 10 of the LMArena text arena rankings. Now, to put things in a bigger perspective, Chinese companies have five or six models in the top 10, many of them sharing the eighth and tenth spots with US-based companies. So if you're asking how there can be so many models up there: the highest ranking they have is sixth place, and then several more at numbers eight, nine, and 10. Qwen3 Max Preview is now in sixth place on the text arena ranking, making it the top Chinese model right now. The two crazy aspects of this: last year there were zero Chinese models on the list, and Alibaba had no AI models at all, and now, a year later, they are at number six in the global ranking and continuing to develop at a crazy speed. And that's without access to the latest GPUs from Nvidia. The other model, Kimi K2-0905, the number being the date it was released, is tied with competitors such as DeepSeek R1 and Grok 4, again within the top 10. So what does that tell us? It tells us that China is very close behind when it comes to developing advanced models, and that they are very fast in catching up. Again, Alibaba had no models a year ago, and now they are at number six. Alibaba has all the benefits that Google has, right? They have the same access to compute, the same access to talent, the same access to resources, and the same access to data, because they're the parallel of Google in the Chinese universe. So they have a huge benefit across everything they need in order to build a very successful AI company. The only thing they're lacking is access to the same quantity of advanced GPUs as the Western platforms, and yet they're able to come very, very close. 
So I think the race is definitely on, and these models are gonna keep on getting better, and the fact that they don't have access to these GPUs just makes them evolve and innovate faster across other aspects in order to close the gap. And now to the final, and somewhat lighter, news of the day: Albania just announced that they have appointed Diella, the world's first AI-created cabinet member, and Prime Minister Edi Rama shared that at a Socialist Party meeting just this week. If you're asking yourself what this cabinet member is supposed to do, it is supposed to oversee all public procurement processes to ensure tenders are 100% free of corruption, which is a big problem Albania is facing right now. While Diella does not hold any voting rights in the cabinet, it is going to be a cabinet member focused on overseeing government procurement. And I think this is a very interesting approach. If you think about different types of governments around the world, many countries appoint cabinet members not based on their political agendas but based on their experience, in order to provide the best public service. And this is a great example of how AI can be leveraged to actually do good for society without taking away the decision making from humans. I really like that approach. Whether it's actually going to work or not, time will tell. It will be very interesting to follow, but I expect we will see more and more of this in the coming years. That is it for today. We will be back on Tuesday with a fascinating episode comparing, from my personal experience, Gemini versus ChatGPT, seeing where each of them excels and which tool you should use for which use cases. I'm going to be sharing multiple examples in detail, so come and join us on Tuesday. If you are enjoying this podcast, please open your phone right now, click on the share button, and share it with a few people who can benefit from it. 
I know I say this every week, and I know some of you are doing it, because you are connecting with me on LinkedIn, which I highly appreciate. So any of you who want to connect with me, please connect with me on LinkedIn. I really love hearing from you, the listeners: what you think, what you consume, what other topics you think I should cover, and so on. But also share the podcast with other people who can learn about AI, how AI is progressing, and how it's going to impact everybody's lives. Because I do believe, deep inside, that the more we understand this, the more we can push the AI future to be a better future versus a very scary one. So play your role, share this with people, rank and rate the podcast on Apple Podcasts or Spotify, and keep on experimenting with AI and share with the world what you are learning, because that's your way to help in this process. I appreciate each and every one of you for listening, and have an amazing rest of your weekend.
