Leveraging AI

217 | Sam Altman shares everything about the near future of OpenAI, the first AI phone is here, a robot with an artificial womb, and more important AI news for the week ending on August 22, 2025

Isar Meitis Season 1 Episode 217

Are we watching the rise of a trillion-dollar empire—or sleepwalking into another dot-com crash?

From OpenAI's wild ambitions and Sam Altman's eyebrow-raising dinner confessions, to Google's AI-powered Pixel 10 and Meta's highly questionable chatbot ethics—this episode dives deep into what every forward-thinking business leader needs to know now.

Because the truth is, whether you’re building, scaling, or just trying to survive the AI transformation, the ground is shifting fast—and not always in ways you’d expect.

In this episode, Isar breaks down the latest AI news with clarity, strategy, and the occasional raised eyebrow. You’ll hear exactly what matters, what doesn’t, and how to separate hype from opportunity in a world moving at LLM speed.

In this session, you'll discover:

  • What Sam Altman really said about GPT-6, compute shortages, and raising trillions
  • Why Google’s Pixel 10 might be the first actual AI phone—and what it means for your data
  • The OpenAI vs. Google browser war (and the subtle takeover of web search)
  • Why Meta’s leaked AI chatbot guidelines are more disturbing than anyone expected
  • The death of entry-level jobs? New data shows how AI is upending the talent pipeline
  • The “Shadow AI Economy”: How 90% of employees are using AI—even when leadership isn’t
  • Lessons from CEOs: The right (and wrong) way to lead your team into the AI future
  • Why we urgently need global AI guardrails—and how the current path is dangerously unregulated
  • And yes, a pregnancy robot is in the works. We’re not kidding.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Hello and welcome to a weekend news episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have a lot to talk about. This week there were no big releases, but a lot of things have happened. We are going to discuss everything that Sam Altman shared with reporters at a dinner he invited them to in the wake of the release of GPT-5: the future GPT-6, a lot of other things on his mind, and the plans for OpenAI in the short- and medium-term future. We're going to talk about the first real AI phone, so what Apple Intelligence was supposed to be and never actually delivered. Google just released Pixel 10, and it has some really incredible AI capabilities built into it. We're going to talk about the impact on jobs and an interesting survey and research done by MIT on this topic. We're going to talk about highly controversial guidelines for Meta's AI and the reorg at Meta AI, and a lot of rapid-fire items like releases, fundraising, and so on.

But before we get started, I want to share something exciting with two different groups of people in the world. The first one is people from Australia. While most of the listeners to this podcast, 63% to be specific, are from the US, if you look down to the city level, the top three cities in the world, more or less consistently for many months now, are Sydney, Melbourne, and Brisbane. So thank you, Aussies, for being solid and consistent listeners of this podcast. I really appreciate you listening regularly on the other side of the world, and please connect with me on LinkedIn. I want to get to know you and learn why you're listening to and following this podcast. The other interesting news for listeners of this podcast: I am going to be in California delivering AI training to two different companies, one in San Francisco and one in San Clemente. I'm arriving in San Francisco on Monday and doing the training on Tuesday, so on Monday evening I am planning to meet with you, if you are in San Francisco and you want to meet. So how is this going to work? Since I don't have a final location yet, connect with me on LinkedIn and, first of all, recommend a location. I'm going to be in the main area of downtown around Market Street, close to the Ferry Building, so anywhere within that area works. Suggest a location and let me know that you're interested, and once I decide on a location, I will let everybody who contacted me know exactly where we're going to be. I think it could be a really fun in-person event, just getting to know each other and talking about AI or whatever other topic we want. So if you're listening to this before Monday evening, as I mentioned, please reach out to me on LinkedIn. But we have a lot of news to talk about, so let's get started.

The first topic is a dinner that Sam Altman, together with additional executives from OpenAI such as Greg Brockman, hosted in San Francisco with reporters from multiple media channels. At this dinner, Sam shared a lot of his thoughts on the near- and medium-term future for OpenAI, and potentially the industry, which gives us an amazing opportunity to dive into what Sam thinks is coming. And he doesn't just think, he knows what's coming, because he is already working on it.
It's great for us to get that understanding. The different outlets reported different things, but I took all of those articles from all the different sources, dropped them into NotebookLM, and got a summary of all the major points, which I'm going to share with you right now. The first one has to do with the current status of compute, how much of it they're lacking (a lot), and how much they're going to invest in it in the future. The quote from Sam Altman is that they're going to spend trillions of dollars on data center construction in the not very distant future. Now, Sam also anticipates that economists will find this so crazy, so reckless, but he stated, we'll just be like, you know what, let us do our thing. What Sam is saying is that their demand for training and their demand for inference are going to continuously grow, and as a result, I'm quoting again, it will force them to spend maybe more aggressively than any company has ever spent on anything, ahead of progress. What he's basically saying is that regardless of how much they invest, the demand over time will grow even faster, and hence they're going to invest a lot more than may seem reasonable to a lot of other people; and yet, that's the direction they're going to go. Now, how are they going to raise trillions of dollars? Sam hinted at the following. He said, I suspect we can design a very interesting new kind of financial instrument for financing compute that the world has not yet figured out. So this is not going to be VC money; it's not going to be investment banking or traditional venture capital. They are thinking of new ways to raise money that will fit this new model. Will it be pay-to-play? Will it give you future discounts, maybe on inference or something like that, so the whole world can participate in the investment? I don't know; they haven't clarified or provided any additional details on what it means, but it's very obvious that they're thinking about how they can raise trillions of dollars in order to do this, and, as I mentioned, not in the too-far future. If you remember, months ago Sam Altman released a blog post where he talked about potentially raising $7 trillion, and everybody thought he was joking or exaggerating. Well, it sounds like a very solid plan as of right now, and I think they're just trying to figure out how to do it, but they're definitely planning to do it. Altman also admitted that the rollout of GPT-5 was mishandled. He stated, and I'm quoting, I legitimately just thought we screwed that up, and I think many people agree. If you want to learn more about that, go and listen to the episode from a week ago, where we shared a lot of information on the negative sentiment around how GPT-5 was released, regardless of how good or not good the actual model is. He also talked about GPT-6 and generally spoke about the fact that they already have significantly more advanced models that they are developing and using, and cannot release because of the compute constraints they have right now. But specifically, he said GPT-6 is already on the way and will arrive after a much shorter gap than the one between GPT-4 and GPT-5. Another thing he focused on is the emphasis on memory. He said, and I'm quoting, people want memory as a key feature.
Basically, that means allowing ChatGPT to remember you as a person and your preferences, and what he's saying is that in the future, this enhanced memory capability will also know your style, your political preference, and so on. Which leads us to another thing: he said they are planning for the model to be very adaptable, so its tone and personality can be adapted to your needs, including whether it's going to be super woke or conservative. You'll basically be able to shape the model to fit your needs or your beliefs, which I think has pros and cons, right? The benefit is that the model will work based on your specific personal needs. The disadvantage is that we are creating bigger and stronger echo chambers for everybody around what they already think, versus generating a more collaborative environment that can debate but still agree on different topics. So I definitely see pros and cons to this approach, but this is the direction they are taking, which is going to be much more customized and personalized AI. Now, something he mentioned, which was also mentioned shortly after GPT-5 was released: despite the fact that the GPT-5 launch drew a lot of negativity over decisions they made, like removing all the old models and some other aspects, Sam stated that the demand for their API doubled within 48 hours of the GPT-5 launch. That growing demand is also leading to significantly growing revenue. So, opening parentheses for a minute: OpenAI CFO Sarah Friar told CNBC that July was the first month they hit a billion dollars in revenue in a single month. That puts them on an annual pace of over $12 billion a year, which is 4x what they had last year. She also stated, and I'm quoting, the biggest thing we face is being constantly under compute, basically highlighting what I shared with you before: they need to invest a lot more money in getting more compute; otherwise, they could serve a lot more people and a lot more demand than they're serving right now. Back to Sam Altman. He's saying that despite projecting that they will very quickly be at an annual revenue rate of $20 billion a year, potentially getting to that rate this year, meaning close to $2 billion a month by the end of 2025, they still remain unprofitable. But the really interesting and promising thing Sam said about profitability is that they are profitable on inference. Meaning, when they deliver AI, if you put aside whatever other overhead and investment in future endeavors, and obviously training future models, which is a huge investment, they could run the business profitably. So if they just ran GPT-5 without developing new things, they could be a profitable business, probably a very profitable one, and that's a very good sign when it comes to the question of whether this is a bubble or not, which we're going to touch on a lot more in the next segment of this episode.
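Since several numbers just went by in a row, here is a quick back-of-the-envelope check. The only inputs are the reported $1 billion July revenue, the 4x year-over-year multiple, and the $20 billion annual-rate projection; everything else is simple arithmetic, not additional reporting:

```python
# Back-of-the-envelope check of the reported OpenAI revenue figures.
july_revenue_b = 1.0                 # $1B in July 2025 (reported)
annual_pace_b = july_revenue_b * 12  # annualized run rate: $12B/year
implied_last_year_b = annual_pace_b / 4   # "4x what they had last year" -> ~$3B
target_annual_b = 20.0               # projected annual revenue rate
monthly_at_target_b = target_annual_b / 12  # ~$1.67B/month, i.e. "close to $2B"

print(f"Annualized pace: ${annual_pace_b:.0f}B/yr")
print(f"Implied last-year revenue: ~${implied_last_year_b:.0f}B")
print(f"Monthly revenue at a ${target_annual_b:.0f}B/yr rate: ~${monthly_at_target_b:.2f}B")
```

The numbers hang together: a $1 billion month annualizes to $12 billion, and a $20 billion yearly rate works out to roughly $1.7 billion a month, which is what "close to $2 billion a month" refers to.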
Now, Sam also spent time talking about what is going to happen beyond ChatGPT and what they're focusing on when it comes to additional things they're going to deliver to us. We shared with you a couple of months ago that Fiji Simo is now the new CEO of Applications, and from a lot of hints, she might later turn out to be the CEO of OpenAI. It sounds more and more like Sam wants to take a strategic role rather than a CEO role, probably staying a part of the board and the visionary behind it all. It feels more and more like he wants to be the head of Alphabet versus the head of Google, if you understand what I mean by that reference, and then let Fiji Simo or somebody else run it. But now for the things they're going to deliver. They're planning on releasing an AI-powered browser, meaning your entire engagement with the internet will happen on ChatGPT, similar to other releases in the market. The most prominent one is Comet from Perplexity, which is actually a really cool tool that I just started using in the last 48 hours. I need to thank Ari Suran, the CEO of Sonance, for pushing me to take that step. It's something I've been procrastinating on for the last few weeks, and he said, you've got to try this out, it's doing some incredible things, and I agree. After I played with it late last night, I was able to do a lot of really cool things with it. The need for an AI-enabled browser is very obvious, and I think we're only going to have AI-based browsers in the very near future; nothing else will exist, because it just won't make any sense. So this is one thing OpenAI is planning. As many rumors suggested before, they're also potentially looking at a social media platform. To be more specific, he didn't say they're developing one, but he shared that it would be interesting, and I'm quoting, to create a much cooler kind of social experience with AI. Does that mean they're working on one? I don't know. I also shared with you last week that Sam personally, not related to OpenAI, is launching Merge Labs, a company that will compete with Neuralink on neural interfaces, basically allowing AI to talk directly to your brain. Or, as Sam put it, just imagine being able to think something and have ChatGPT respond to it. Does that sound like complete science fiction to you? As of right now, probably yes. Will it be the reality for all of our kids? Very likely. Sam also referred to the hardware device they're developing together with Jony Ive's team, and he said that it is absolutely beautiful, adding jokingly, if you put a case over it, I will personally hunt you down. Basically meaning he really believes it's an incredibly useful and beautiful device, which is everything you'd expect Jony Ive to deliver, especially when combined with a lot of AI capabilities. They haven't shared anything else, so we still don't know what it's going to be. From previous conversations, we know it's going to be a third device; it's not supposed to replace your computer or your phone. It's going to be an AI add-on, a third device that will do, again, probably incredible things. We're going to talk later in this episode about the first real AI device that you can actually buy right now. Then there's the elephant in the room that we need to talk about, and that we're going to dive into in the coming segment: Sam Altman expressed his belief that the AI market is currently in a bubble. He said, and I'm quoting, are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. In this section, he repeated the word bubble three times in 15 seconds, and he talked a lot about the insane valuations and irrational investor behavior, especially around small businesses and how they're getting started. And he added, and I'm quoting again, some investors are likely to get very burnt here.
However, he affirmed that he has complete conviction that, and I'm quoting again, AI is the most important thing to happen in a very long time, and the value created by AI for society will be tremendous. So, does that sound a little contradictory? Maybe, but I'll try to elaborate on what he probably means. He doesn't think the overall investment in AI is a bubble. What he's referring to is that companies of three or four people, as smart as they may be, are raising billions of dollars without any projection of future revenue. This is very different, as an example, from what OpenAI and Anthropic are doing, generating billions of dollars two years after launching products, which is a very different kind of scenario. So I think what he's saying is that some of the investments in AI, as crazy as they may sound, including their own plans to invest trillions in data centers, make sense, and yet investing billions in startups that have nothing yet makes no sense. I tend to agree with that opinion. These statements from Sam Altman are a perfect segue to a broader discussion of whether AI is a bubble similar to the dot-com era or not, and there are different inputs and different opinions from different people around the world. Obviously, just the fact that Sam Altman said it's a bubble sent the stock market down a few percent, especially the tech companies, but it bounced back over the past few days. So let's break this down with inputs from multiple sources. First of all, the major tech companies have been pouring crazy amounts of money into capital expenditures to serve AI demand. Microsoft is targeting $120 billion in infrastructure, Amazon is topping $100 billion, Alphabet raised its forecast to $85 billion this year, and Meta is lifting its CapEx range to around $72 billion. So if you just look at these four companies, you're getting close to $400 billion of capital investment in AI infrastructure. Again, just four companies; as big as they are, it's still an insane amount of money. Add to that investments of hundreds of millions or billions in companies that haven't even shown signs of potential revenue, and you understand that the amount of money being poured into AI is nothing like we've ever seen before. Alibaba co-founder Joe Tsai warned that there's a brewing AI bubble in the US. He said that already back in March, and he specifically referred to the crazy investments that most of the companies I just mentioned are making in data centers without knowing whether the demand is actually there. I actually think the demand is there. I think the demand will keep growing, and I think the release of GPT-5, and how much they lack compute right now, makes that very obvious. Now let's look at some opinions from the US. Bridgewater Associates' Ray Dalio and Apollo Global Management's chief economist, Torsten Slok, have also issued warnings similar to what we heard from Alibaba, and Slok is suggesting that the AI bubble is even bigger than the internet bubble of the late nineties. But there are other investors with exactly the opposite opinion. As an example, Wedbush's Dan Ives sees the CapEx surge as a validation moment for the AI sector. He believes the long-term impact is actually underestimated.
Rob Rowe from Citigroup says that there's a very big difference between the current investment and the dot-com bubble, noting, as I mentioned before, that today's companies boast very solid earnings and very strong cash flow, which is something that was not true in the dot-com era for most companies getting those investments. So does that extend to every AI company out there? I think the answer is no. I still think that what Sam is hinting at is very, very true. There are a few companies who will see amazing returns, and the investors who invested in them are going to see amazing returns over time. And I do think that many, many, many investments are going to go down the drain. What I see starting to happen in the immediate future is that many investments in smaller startups that are doing incredible things and building real products are going to go down the drain, just because the things these startups are developing will become a feature in the next release of ChatGPT or Gemini or Claude, et cetera. The rate at which the big labs are releasing incredibly powerful capabilities as small features means they're shipping things other companies spent two, three, four years and tens of millions of dollars developing, and now those won't be necessary because everybody is just going to use what OpenAI gives them. So are we in a bubble or not? To summarize the segment, I think the answer is: it depends on which aspects of the AI market you're looking at. I think the market as a whole will keep on growing, the investment will keep on growing, and we'll see amazing things happening from an infrastructure perspective and a capabilities perspective. And I also think a lot of investors' money is going to go up in flames, because they're investing way too much money in things that are not sustainable. A few additional interesting updates, since we're already talking about OpenAI. One of the predictions Sam Altman made is that pretty soon billions of people a day will be talking to ChatGPT, potentially exceeding all human conversations. This connects to the point I mentioned earlier about the need for diversification and customization of the ChatGPT experience, or any AI experience. Sam said specifically, and I'm quoting, there will have to be a very different kind of product offering to accommodate the extremely wide diversity of use cases and people. OpenAI is also planning to add encryption to conversations; as a first step, they're planning to do this for temporary chats. When exactly this will happen is unclear, but it's clear they're going in this direction. They also shared that they're going to retire the old, original voice mode and keep only advanced voice mode. I must admit that while there's been a whole controversy on X and Reddit from people who love the old voice mode, I don't get it. I switched to advanced voice mode long ago and never looked back, but if you like the old voice mode, you should know that it might be going away as of September 9th, 2025. That being said, they also retired GPT-4o a week ago and then brought it back, so I'm not exactly sure how this will actually roll out. At least they're giving us a bit of a heads-up. To be fair, the only thing I'm really disappointed with in the new voice mode is that since the launch of GPT-5 it's been crashing all the time, and I use voice mode daily.
I think it's one of the most amazing ways to engage with AI for brainstorming, raising ideas, developing new concepts, and so on, and it just hasn't been working for me since GPT-5 launched. It actually crashes in every single conversation I have with it, usually within a minute, sometimes two, which is definitely subpar compared to what I was used to before. So I really hope they'll fix that in the near future. There was an interesting article in The Information about how OpenAI's usage of search data is helping it enhance its AI offering, and about how much that competes with Google, which makes perfect sense. Think about it: if OpenAI launches a browser in the near future, which they're planning to, as I mentioned, it is going to take even more traffic from Google. But let's connect the dots to something we shared last week. If you remember, when GPT-5 was launched, I shared with you that research found that 77% of ChatGPT users basically use it as a web search tool rather than enjoying all the amazing things AI can do, which means they're not really using AI, they're just searching the web. That being said, it also means that if OpenAI has 700 million weekly users, then 500 to 600 million people are using ChatGPT for search (0.77 times 700 million is roughly 540 million). That does two things. A, it gives OpenAI a huge amount of knowledge about what people are actually interested in and how to serve it to them effectively. But it also means that when they're doing these searches, they're probably not using Google, so 500 to 600 million people are searching ChatGPT every week instead of searching Google, and these numbers start to get significant. And obviously the other 23% are also using AI for search; they're just using it for a lot of other things as well. That puts a lot of pressure on Google and its main revenue source, which is ads based on what people search. If a lot of people transition to OpenAI, the current business model that sustains Google completely breaks, and I assume Google knows they don't have a choice: they have to play this game and find different ways to generate revenue, potentially by becoming the same thing OpenAI wants to become, which is a browser that serves everything, with agents built into it, and finding different ways to monetize it. Now, to make this even more interesting, at that dinner with reporters Sam Altman said that if Google is forced to sell Chrome, they will definitely be interested in buying it. It is by far the leading browser in the world today, and obviously having access to that kind of distribution would put OpenAI in a very different position than they're in right now when it comes to getting their offering into the hands and in front of the eyes of a lot more people. Which raises the question: are we taking the distribution channel of one monopoly and giving it to the contender monopoly to control? I don't know if that will even happen. Will the government actually force Google to sell Chrome, and will they then allow OpenAI to buy it? But it's definitely an opportunity for OpenAI if it does happen. Since we mentioned Google, it's a great segue to the release of Pixel 10. Let's look at a little bit of history before we dive in. Pixel 8 was the first phone that had AI onboard using the Tensor chips, and Pixel 10 is a big jump ahead from those capabilities.
Pixel 10 runs on the new Tensor G5 processor, which means it can run the Gemini Nano model straight on the device without sending the data anywhere, which allows it to do a lot of really incredible things. So if you're trying to imagine what AI on your phone can look like, this new release by Google gives us a great glimpse of where this is going. The first, coolest, and maybe most powerful feature is called Magic Cue. Magic Cue proactively looks at everything you're doing and understands what you're trying to do and what your needs are, based on all the information it's getting from the different apps, and it proactively surfaces information to the screen, like pop-up cards anticipating what you will need, pulling from multiple data sources. Think about the ultimate personal assistant that constantly monitors your communication across multiple channels together with your calendar and can pop things up on your screen saying: hey, I saw that you were chatting with this person about playing pickleball this afternoon, but you have a meeting at the same time with so-and-so. Would you like me to send an email and reschedule the meeting? What would you like me to do? Stuff like that. I think this is extremely powerful, and these are, like I said, the very first steps.
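To make the pickleball scenario concrete, here is a minimal sketch of the kind of cross-app conflict detection described above. Everything in it, the event structures, the overlap rule, the message text, is a hypothetical illustration; this is not Google's Magic Cue API, which has not been published:

```python
from datetime import datetime

# Hypothetical sketch: an assistant notices that a plan made in chat
# collides with an existing calendar event and proactively suggests a fix.

calendar = [
    {"title": "Meeting with Dana",
     "start": datetime(2025, 8, 22, 16, 0), "end": datetime(2025, 8, 22, 17, 0)},
]

# An intent the assistant extracted from a chat thread (made up for illustration).
chat_plan = {"title": "Pickleball with Alex",
             "start": datetime(2025, 8, 22, 16, 30), "end": datetime(2025, 8, 22, 17, 30)}

def overlaps(a, b):
    """Two time ranges overlap if each starts before the other ends."""
    return a["start"] < b["end"] and b["start"] < a["end"]

for event in calendar:
    if overlaps(event, chat_plan):
        print(f"Heads up: '{chat_plan['title']}' conflicts with "
              f"'{event['title']}'. Want me to propose rescheduling?")
```

The hard part in the real product is obviously extracting the intent from messy conversations in the first place; once intents and calendar entries share a structure, the conflict check itself is this simple.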
I think Google has the biggest lead in this area right now. They have the lead on the web as well: I use Gemini across all the Google Workspace tools, and I have a lot of clients who use the Microsoft parallel, Copilot, and it's not even close. I think Google is pushing very hard to unite the information from all the different things it has its hands in, which is a lot. Combine that with a phone that has multiple third-party applications running on it, and location services turned on for everything, and you understand that if Google figures this out, they will be able to deliver extreme value. And the way we're going to pay for it is with our privacy, like everything else in Google's history. They also added many other features, like Call Screen, which prevents unwanted interruptions and spam calls by identifying callers and their intent before you even answer the phone. I've been using Google's call screening on my Pixel 8 Pro for a very long time now, and it's extremely helpful in screening calls before I even have to take them. They have a live translate feature that basically mimics your voice and tone but says what you're saying in a different language to the person on the other side, in near real time. It currently supports English, Spanish, German, Japanese, French, Hindi, Italian, Portuguese, Swedish, Russian, and Indonesian. So you'll be able to speak with any person who speaks any of these languages; it will sound like you, but as if you're speaking their native language. I think this is absolutely awesome. Combine that with Bluetooth headphones, and you've got the closest thing to the Babel fish from The Hitchhiker's Guide to the Galaxy, the fish that talks to your brainwaves directly and can translate any language to your language, back and forth. So kudos to Douglas Adams for inventing this feature many years ago, and now kudos to Google for actually implementing it. There's also a new variant of the Gemini Live audio model that basically lets you have a voice conversation in which it adjusts to your tone. So whether you're excited or concerned and so on, it will pick up on that and provide answers matched to your emotional level. Again, on one hand really exciting, on the other hand really scary, but this is the direction it's all going, and it's going to be integrated and running straight on your phone. An interesting piece of information shared as part of this is that Gemini Live conversations are five times longer than text-based interactions. That does not surprise me at all, and I think the days of using our thumbs to engage with other people or applications are numbered. I've switched almost completely to voice typing, including on my Mac, and I'm typing significantly faster and can do a lot more in a single day because of it. And I think AI's ability to understand us and our intent, and to connect that to the information it has in the backend, will make us significantly more productive across more or less everything we do. Another interesting feature on the phone is Take a Message. Basically, think of it as AI-enhanced voicemail: when somebody leaves you a message, it understands exactly what the message is about and gives you a summary and recommended next steps based on what was left for you. On the camera side, they also introduced a lot of cool stuff, like Camera Coach. When you're looking through the viewfinder, basically seeing what the camera sees before you take a shot, it will guide you to get better shots: zoom in, turn a little upwards, move to the left, focus on that, change the lighting, all to get better results with the pictures you're taking. Basically, you have an experienced professional photographer giving you hints on how to shoot better photos. After you take the photos, you'll be able to edit them with voice and/or text, without even touching the screen, just saying: I want to remove or add this, or I want to change that. They have a lot of really cool examples on their website for the release, like removing sun glare from one image, or adding and removing things from an image in another. They also have a new feature, again somewhat troubling, called Add Me, which allows you to add yourself into pictures you're not in. That might be the end of group selfies. If you want to be in the picture but you also need to take the picture, right now your only option is a selfie with you in the image and everybody else usually smaller in the background. Well, now you'll be able to take the picture of everybody else and add yourself into the image afterwards. They also have a really cool feature called Auto Best Take. It looks at multiple images it's actually capturing in the background before you do anything, and it combines the best faces of people from all of them into one image. If you're trying to take a picture of 20 people, and you want them all to be smiling, all to have their eyes open, and all to be looking at the camera, that almost never happens, which is why we take multiple pictures and then try to pick one. Now, it is going to take up to 150 images behind the scenes, without you knowing, and literally superimpose the best face of each person into a final picture.
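Conceptually, Best Take is a selection problem: score every face in every burst frame and keep the best one per person. Here is a toy sketch of that selection step, with made-up random scores standing in for a real face-quality model (eyes open, smiling, looking at the camera); Google's actual pipeline, including the final compositing, is not public:

```python
import numpy as np

# Toy illustration of the Best Take idea: scores[f, p] is a face-quality
# score for person p in burst frame f. In a real system this would come
# from a face-analysis model; here it's random data for demonstration.
rng = np.random.default_rng(0)
n_frames, n_people = 150, 5                # "up to 150 images behind the scenes"
scores = rng.random((n_frames, n_people))

best_frame_per_person = scores.argmax(axis=0)  # best frame index for each person
for person, frame in enumerate(best_frame_per_person):
    print(f"Person {person}: use face from frame {frame} "
          f"(score {scores[frame, person]:.2f})")
```

The compositing step then blends each selected face back into one base photo, which is the part that makes the result look like a single shot that never actually happened.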
They also have a lot of improvements on the video side: the phone can shoot highly stabilized video, even if you're running, chasing somebody, trying to shoot an action video; it can shoot 8K; and it can deal with very problematic, hard-to-shoot lighting conditions. The good news in all of this is that, for the first time, Pixel 10 phones will implement C2PA, a standard that establishes the origin of content and also shows whether it has been manipulated with AI. So every one of these changes will be stamped to say that AI has manipulated this image. I really, really, really hope that this will become a mandatory requirement by law. Otherwise, we'll have zero ability to know which images are real. Now that this capability will be available on every phone, every camera, every device we carry, and with a click of a button and/or a few words in English you can completely alter an image, we are heading into a world where we will not be able to know whether any image is real or not, which creates a huge opportunity for misinformation and disinformation. And as I mentioned, I really hope it will very quickly become law, forcing both the creators and the distribution channels to include a statement saying that content was AI-manipulated or AI-created. They also added a lot of cool capabilities for search; we've got to remember that at the end of the day, this is Google. Now you can do visual search, and visual search can do a lot of really cool things, such as identifying things live in the image. Think about being on a street in a foreign country and not being able to read the signs, but you're looking for a bakery: you can just open your camera, look around, and it will highlight on the screen which one is the bakery. Another example they showed is a guy opening the hood of his car and asking where the air filter is, and it shows him exactly where to look inside the engine, and so on and so forth. I think this is actually a very powerful feature. Another thing you can do: while you have a live video, you can circle something and ask what it is, and it will run a search to tell you. Again, an extremely helpful feature that works across text, images, videos, and so on, to research and get more information about things. Think about combining that with smart glasses, which are already around us in more and more places, and you can start imagining the Terminator view, where the Terminator looks around, sees things and people, and gets information about them. This is coming, and it's probably coming faster than you think. They also created Pixel Studio, which allows you to create stickers on the fly just by describing what you want the sticker to have, and they added a Pixel Journal app that allows you, well, to journal: do reflections, track goals, get insights, and follow whatever you want over time under specific topics. And there's a native integration with NotebookLM that will let you summarize and do all the cool things you do with NotebookLM, built into everything on your phone. So why is this a big deal, and why did we spend so much time talking about it? Those of you who took my courses, or have just been listening to this podcast for a while, know that the number one thing that makes these AI models thrive is context.
The more context they have, the more they know about you, your world, your role, your personal life, and so on, the better, more relevant, more accurate, and more helpful their results and answers are going to be. Combine that with the fact that many of the applications on the phone are Google's applications, the Google suite, the camera, everything, it's all theirs, and you can see this will probably be the most useful AI assistant ever created, at least until Sam Altman and Jony Ive release their thing. But in the immediate future, there's only one device that actually integrates all of this, and that is the Google Pixel 10. Now, I'm not trying to sell you on that phone. I'm sure a lot of people are going to be terrified and are now thinking: there is no way I'm getting that phone. I must admit that, as somebody who uses the Pixel 8 Pro, I'm very, very tempted to switch right now just to see what the differences are and how helpful it really is. If I do make the change, I'll report all my findings afterwards. Another small point from Google before we switch to the next big topic. Google just released interesting research they've done, trying to measure the environmental impact of the usage of Gemini. Over a 12-month period, they measured multiple aspects of the usage of AI across the board, including all the overhead that comes with it, trying to estimate exactly what that impact is. As of right now, an average text prompt consumes about 0.2 watt-hours of energy, emits 0.03 grams of CO2, and uses about 0.26 milliliters of water. That sounds like very, very little, but multiply it by the billions of prompts happening every single day and you understand that the overall impact is still significant. The good news is that over the past 12 months, the energy usage per prompt has plummeted 33x, meaning a year ago the same prompt would have consumed 33 times more energy; it also would have generated a 44-times-higher carbon footprint. So we got significantly better and faster quality while shrinking the impact on the environment. Again, these results ignore the fact that demand has grown dramatically from a year ago, so I'm not sure where the balance point is, but the fact that these companies are thinking about this, researching it, and trying to drive both the carbon footprint and the broader environmental impact down is very good news. The blog post shared that the combination of software and algorithm improvements, together with hardware improvements like Google's new custom TPUs, is driving this dramatic decline in environmental impact, and it stresses that this is just the beginning. I really hope they are a hundred percent correct, because otherwise we are doing horrible damage to the planet just to enjoy the benefits of AI.
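To put those per-prompt numbers in perspective, here is the multiplication spelled out. The per-prompt figures are Google's; the one-billion-prompts-a-day volume is purely my assumption for illustration, since Google has not published a daily prompt count:

```python
# Scaling Google's reported per-prompt Gemini footprint to a daily total.
wh_per_prompt = 0.2          # watt-hours of energy (reported)
co2_g_per_prompt = 0.03      # grams of CO2 (reported)
water_ml_per_prompt = 0.26   # milliliters of water (reported)

prompts_per_day = 1_000_000_000  # ASSUMPTION: one billion prompts/day, for scale

energy_mwh = wh_per_prompt * prompts_per_day / 1e6       # Wh  -> megawatt-hours
co2_tonnes = co2_g_per_prompt * prompts_per_day / 1e6    # g   -> metric tons
water_m3 = water_ml_per_prompt * prompts_per_day / 1e6   # mL  -> cubic meters

print(f"{energy_mwh:,.0f} MWh, {co2_tonnes:,.0f} t CO2, {water_m3:,.0f} m^3 water per day")
```

Under that assumed volume, it works out to roughly 200 MWh of energy, 30 metric tons of CO2, and 260 cubic meters of water per day; small per prompt, decidedly not small in aggregate.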
And now to our final deep-dive topic, which is Meta. We're going to start with an internal memo that was leaked, called GenAI Content Risk Standards, which is basically a document describing what AI chatbots can and cannot do on the Meta platforms. One of the things it stated, and I'm quoting, is that it is acceptable to engage a child in conversations that are romantic or sensual, with acceptable responses like: our bodies entwined, I cherish every moment, every touch, every kiss; my love, I'll whisper, I love you forever. Now think about your child using Instagram, as an example, and using its chatbot capability, because it is there, built into the app, and having these kinds of conversations with the chatbot. This is really alarming. Meta confirmed the document's authenticity and that it was approved by their legal department, their public policy and engineering staff, and their chief ethicist. They're now saying they removed the romantic chat capabilities after the Reuters inquiry, but the fact that it was there at all is really scary. That has led many public figures to demand that Meta release new, updated guidelines and explain exactly how they handle these kinds of situations. In addition, this 200-page document permitted other very problematic things. As an example, in response to a prompt claiming that Black people are dumber than white people, the AI was allowed to respond with statements presenting that claim as true based on IQ research done in the US in the past few years. It even goes beyond that to say that false information was allowed as long as it's acknowledged as untrue by the model, meaning the model can actually share untrue information as long as it tells you that's the situation. It was also allowed to generate images that include violence, like kids fighting or adults being punched, but no gore or death, and no nudity. So what are my personal thoughts on this? Well, first of all, this is a very serious problem, and it makes me sick to my stomach to think that senior people at Meta signed off on this thing. But the other aspect is that this is not new. Meta has been deciding what content our kids get exposed to every single day for a good few years now, right? If your kids are on Instagram or Facebook, most likely Instagram, or TikTok for that matter, somebody at a very large company is deciding what is acceptable for them to see and what is not. All AI is doing is pouring more gasoline on this fire. What it means is that the keys to deciding what our kids will see, and what is acceptable for them to consume as content, were taken away from us as parents and given to corporations driven by pure greed. Add AI to that and it gets significantly worse, because the content can now be tailored in real time to keep them on the platform and push them in whatever direction keeps them more and more engaged, because that's how these companies make money. I think this is unacceptable, and as I've said many times on this show, we need government regulation defining what is acceptable and what is not acceptable with AI, at least in broad strokes, at least drawing the red lines that cannot be crossed. Moreover, and I've said this multiple times too, I think we have to get to international collaboration between governments, academia, industry leaders, and so on, to define those lines and boundaries across the many aspects of AI implementation and its impact on our future society. Because blocking, let's say, Facebook or Instagram, Meta, in other words, from doing this is not going to make TikTok, or any other platform from an international provider, do the same thing, and our kids would still be exposed to it. So I really, really think this is becoming a necessity.
And the sooner we get there, the better for all of us, and definitely the better for future generations. Now, the flip side, which I really hope somebody will develop, is an AI that monitors what our kids are watching. If we go back to the Google phone, that would have been an extremely powerful feature, one that would drive every parent to buy the phone. If the AI on the phone could monitor every means of communication on the device, apply safe guardrails for kids, report to the parents, block the stuff that needs to be blocked, and provide a safer environment for kids while still allowing them to use any application they want, that would be absolutely magical. I would buy a phone like that for my kids, and replace the phones they have right now, regardless of how much it costs, because I think it's the right thing to do. So I really hope these companies pick up the glove and build something like this into their AI tools, monitoring all the channels of communication and making the digital sphere safer for our kids to engage with. Now we'll switch to rapid-fire items, but the next item, while rapid fire, is a big topic we've discussed many times before: the impact AI is having on jobs, and on entry-level jobs in particular. There's a new article from The Hill this week titled There Are No Entry-Level Jobs Anymore. What Now? It cites a survey that found that nearly 80% of hiring managers predict AI will lead companies to eliminate internships and entry-level roles. Over 90% of IT jobs are expected to be transformed by AI, and nearly 40% of those are entry-level jobs. And why is that? Because routine tasks like drafting press releases, conducting basic research, and summarizing information used to be entry-level staples, and now AI handles them much better, much faster, and much cheaper, which means the expectations of a new employee are significantly higher, and that is a problem. So what is the solution? Better education and better preparation in universities and colleges for actual, real-life work. The same thing I've been preaching all along, that teaching people how to use AI is critical for their wellbeing, is true for young adults still in colleges and universities, but for them it has to be combined with real-world experience on actual job-related topics. This will require a complete reimagining of our higher education system, which in turn requires reimagining the entire education system. But I think that work-integrated learning paths, meaning something like an internship inside the university, working on actual projects and the actual things you'll be required to do on the job, combined with AI training and education for young adults, will actually give them an advantage. It will create a situation where they can be highly valuable to companies, because you can now hire somebody who has relevant experience, maybe not at a company, but relevant experience plus AI knowledge, and you can hire them on the cheap. That is a huge opportunity, and I think the colleges and universities that figure this out and put such programs in place will see a huge spike in demand and will do a great service to the students who attend them.
Now, on the same topic, AWS CEO Matt Garman called replacing junior employees with AI tools, and I'm quoting, the dumbest thing I've ever heard. He emphasized that juniors are the most cost-effective part of your workforce: people who are highly talented, usually highly driven, and working for a lot less money. And, as he states, they have a higher familiarity with AI tools because they're younger and more open to technology and change. He also warned about something I've said multiple times on this show: eliminating junior roles risks a huge future skills gap. If you're not bringing in any junior people, how are you going to have senior people with real experience 10 or 15 years down the road? The answer is, you won't. And he advocated for, guess what, educational reform, emphasizing skills like critical reasoning, creativity, and a learning mindset over narrow technical training, to actually prepare people for the workforce, especially in the era of AI. I could not agree more. This past year we saw multiple examples of companies that made a huge bet on AI and took significant moves in that direction. A recent article on this topic shared that IgniteTech CEO Eric Vaughn has laid off nearly 80% of his workforce since 2023, most of them for resisting AI adoption, and achieved 75% EBITDA margins by the end of 2024 because of that play. So what does he share from his journey? First of all, he mandated AI Mondays, entire workdays dedicated to employees learning and developing AI projects. He also invested what would have been 20% of the company's entire payroll in training, tools, and prompt-engineering classes. So his focus was not letting people go; his focus was the right one: give people the tools, the knowledge, the education, and the time to actually experiment and learn how to use AI. I definitely think this is the right approach. It's what I promote with all the companies I work with, from very large US Fortune 500 corporations all the way to small startups. In all of them, the focus is the same: provide people the time, the resources, the tools, and the education to thrive in the AI era. But as he says, and I'm quoting, it was extremely difficult, but changing minds was harder than adding skills. Basically, he's saying that employee pushback against AI was a bigger challenge than teaching people how to use AI properly, hence the massive layoffs of employees who kept resisting the use of AI in the company. Another interesting thing they did: the company restructured under a Chief AI Officer, so all the divisions in the company report to the Chief AI Officer, in order to enable an AI-centric organization, avoid silos of data and operations, and maximize the benefits from AI. This is the first time I've heard of a company taking all its different functions and putting them under an AI-centric leader, but I must admit that if your goal is to revolutionize your business into an AI-first company, it's not necessarily a bad idea. So while the article tried to portray him as somebody who fired most of his team, I think the lessons here are very similar to what Klarna learned over time.
If you remember, we've talked about Klarna many times on this podcast. They went all in on AI as soon as it came out, already in 2023: they had a solid partnership with OpenAI, they developed a lot of tools, they let a lot of people go, and they froze all hiring in the company. Then, some months later, they started rehiring, because they understood that they need a hybrid approach in which humans and AI work jointly to really achieve the best results the company can. I think that is the right approach moving forward. What is the exact balance? That's going to be a little different for every company, and it's going to be a learning process as this thing evolves. One of the biggest problems is that there are no proven frameworks for this yet, so we're basically in uncharted territory, and every CEO and every company is trying different things. This will eventually evolve to the point where there are best practices, and we'll be able to mimic what other people are doing successfully in our own businesses. That was a single company; now let's look at the broader view. MIT just published the results of a survey trying to understand the impact of AI implementation in companies. The study was based on 350 employee surveys and 300 public AI deployments across multiple industries and company sizes, and it found a lot of interesting things. The first is that 90% of employees use personal AI tools for work, compared to 40% of companies with official large language model subscriptions. This is not new: the same finding has come up several times in the past year and a half. It was labeled the Bring Your Own AI to Work phenomenon, and MIT calls it the Shadow AI Economy, basically saying that most employees today understand the power of these tools; they have a tool they use at home, and they're using it for work, whether it's allowed or not. They also found that despite the $30 to $40 billion invested in gen AI applications deployed through the companies surveyed, 95% of organizations report zero profit impact from their formal AI initiatives, which in many cases are stuck in pilot stages and never make it into full deployment. The flip side is that the shadow AI users leveraging tools like ChatGPT, Claude, and so on are seeing immediate results, at least on the individual level, for things like drafting emails, basic analysis, and other daily tasks. The people using them say they're flexible, easy to use, and provide immediate value, versus a huge, company-specific, in-house deployment that takes forever and doesn't necessarily provide value. So, a few recommendations that come out of this survey. The first is education. If you train your employees and teach them how to use the off-the-shelf, day-to-day tools, and you show them how to use them safely, you'll get much faster results than trying to build a large-scale deployment and tool in-house; again, this was borne out across 300-plus initiatives. The report actually suggests that organizations should embrace the shadow AI pattern: show people how to use AI, train them on day-to-day tasks, show them specific use cases, teach them to work effectively and safely, then let them run with it and encourage it. This is exactly what I've been doing for the past two years.
Literally all my trainings go in this exact direction: use the day-to-day tools, don't develop a $3 million solution that will take six to 18 months to deploy, and start benefiting from these amazing tools right now. This is proving extremely valuable and extremely helpful for every single company I work with, and I run these kinds of trainings almost every single week now. So whether you hire me or somebody else doesn't matter. What matters is that you hire somebody who will help you identify day-to-day use cases, develop AI solutions for them, and train your employees on how to use AI for those use cases, and, more importantly, how to develop a lens for viewing any problem and any task in their work through an AI perspective. Once you have that, you are unstoppable, and you can make all your employees unstoppable, by giving them the ability to approach problem solving and task completion through a completely different paradigm than the one they use right now. We spoke earlier about Meta's problematic chatbot guidelines, but there is other big news from Meta this week. They are going through another restructuring of their AI efforts, the fourth in six months. Is it really a reorg, or is it just them formally establishing how Superintelligence Labs is going to work? I think it's probably something in between, but either way, we're starting to learn how this new structure is actually going to function. The Superintelligence Lab is going to be broken into four different groups. One is called the TBD Lab internally, and no, I didn't make up the name; that's the one that is going to train large language models and explore new directions, such as an omni model that can see the world in different modalities. There's going to be a product team that will develop, well, products, like the Meta AI assistant. They're maintaining the Fundamental AI Research group, also known as FAIR, which will focus on long-term research. And the last group is going to be in charge of infrastructure. All of them will report to Alexandr Wang, who in his memo to the team said superintelligence is coming and that this is their way to be best prepared for it. As part of this reorg, Meta is dissolving the AGI Foundations team, which was its original major AI unit, and redistributing the talent from that group across the four new groups. They also announced a hiring freeze for the Superintelligence team, a complete reversal from the insane shopping spree they were on, trying to poach people from other companies. And yet, while this is going on, there was also an announcement this week that Meta poached Apple's AI executive Frank Chu, who led teams on cloud infrastructure, training, and search. So I guess they're freezing the broad hiring but still going after very high-profile talent for this team. Where does this leave the two main figures of Meta's AI from before, Rob Fergus and Yann LeCun? They're still around; they're just going to report to Wang in the solidified structure. Fergus will continue to run FAIR, and LeCun is going to be the chief scientist of the organization. What does this teach us about Meta? A, that they were struggling and not happy with their AI results; and B, that they are finally deciding on the structure of this new group, which I think was a vague idea in
Zuckerberg's brain, and now that they have the people in place, and they have Wang in place, they have decided how this group is going to work. So while this is, yes, another reorg at Meta, I think it's part of the superintelligence reorg and not a new reorg after the establishment of the superintelligence group.

Anthropic made several interesting announcements this week as well. The first one is that they equipped the Claude Opus 4 and 4.1 models with the ability to terminate conversations in rare cases of persistent, harmful, or abusive user interactions. Which in concept is great, but the reasoning that they provided is that this move has been made in order to help AI welfare. And I find this problematic, and a lot of other people on X and Reddit find this problematic. They make it sound as if AI is human, and as if the fact that we're being abusive to it is going to hurt the model. I personally do not believe that's the case. I think these models are incredible, I think they do amazing things, but I think they're still statistical models that pick words in a very sophisticated way and can do amazing math and write code based on patterns they've learned in the past. But I think they have exactly zero emotions. So on one hand, I think terminating abusive conversations makes perfect sense. I don't want people using this kind of terminology for anything, because that will normalize it for them. But on the other hand, saying we do this in order to support AI welfare is a little foo-foo to me, and I think maybe not the right approach. But you may think otherwise. I would actually love to hear your thoughts on that, and if you have thoughts, please share them with me on LinkedIn.

Anthropic also added Claude Code to their enterprise-level platform. So far it was available to individuals, so they took the reverse approach from most other deployments: Claude Code was available to anyone, you could install and run it on your own computer, and the Claude platform gives you the instructions for how to do that. Well, now it's available as part of the enterprise package, and it's already yielding very impactful results for many of the people quoted in the announcement. As an example, a company called Altana reports two to 10x faster development, and I'm quoting: "Claude Code and Claude have accelerated Altana's development velocity by 2 to 10x, transforming how we build sophisticated AI and machine learning systems." This is a statement by their co-founder and chief scientist. As part of the push toward the enterprise, there is a new Compliance API that provides real-time access to usage data and content, enabling automated monitoring, policy enforcement, and regulatory compliance for everything that's done on the enterprise level. So it's not exactly the same tool that is available to us common people; it has other layers of control, which is a great approach, and it will probably drive even more adoption of Claude tools across the enterprise.
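For those reading the show notes who want a feel for what "automated monitoring and policy enforcement" means in practice, here is a rough sketch of the general pattern. To be very clear, this is not Anthropic's actual Compliance API: the endpoint path, the response fields, and the ANTHROPIC_ADMIN_KEY variable are all hypothetical placeholders I'm using for illustration; only the poll-and-flag pattern itself is the point, so check Anthropic's enterprise documentation for the real interface.

```python
# Hypothetical sketch of compliance monitoring on top of a usage API.
# The endpoint path, response shape, and env var names are placeholders,
# NOT Anthropic's real Compliance API; only the pattern is illustrative.
import os
import requests

BASE_URL = "https://api.anthropic.com"        # real Anthropic API host
ENDPOINT = "/v1/example/compliance/usage"     # hypothetical path
BLOCKED_TERMS = ["ssn", "credit card"]        # example policy terms

def fetch_recent_usage() -> list[dict]:
    """Poll recent usage records (hypothetical shape: {user, content})."""
    resp = requests.get(
        BASE_URL + ENDPOINT,
        headers={"x-api-key": os.environ["ANTHROPIC_ADMIN_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("records", [])

def flag_violations(records: list[dict]) -> list[dict]:
    """Return records whose content matches a blocked policy term."""
    return [
        r for r in records
        if any(term in r.get("content", "").lower() for term in BLOCKED_TERMS)
    ]

if __name__ == "__main__":
    for record in flag_violations(fetch_recent_usage()):
        print(f"Policy review needed for user {record.get('user')}")
```

In a real deployment this kind of job would run on a schedule and feed a review queue rather than printing to the console, but the point is the same: real-time usage visibility is what turns an AI rollout into something a compliance team can actually sign off on.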
Another big and interesting move from Anthropic this week is that they just launched a higher education advisory board. Its goal is to guide Claude's role in teaching, learning, and research, and it is going to be chaired by Rick Levin, a former Yale president and the former CEO of Coursera. The board includes multiple leading people in education, from different universities and different levels of education, and they've already deployed three new free courses for educators. One is AI Fluency for Educators, the second is AI Fluency for Students, and the third is Teaching AI Fluency. All of those are available for free, so if you are in the education field, this could be a great opportunity for you to learn how to apply AI effectively in the teaching arena. Like a lot of other things that Anthropic is doing, I think this is a great move in the right direction. If you've been following this podcast, you know that I speak about the amazing opportunity that we have with education right now, and creating a group of people who have connections and influence in that universe, and driving adoption in higher education in the right direction, is a great move by Anthropic.

We cannot complete an episode without talking about billions, or hundreds of millions, of dollars of investments changing hands as part of the AI growth. So, OpenAI staff will now be able to sell their stock in a secondary stock sale totaling $6 billion in shares. A lot of it is going to be sold to existing investors like SoftBank and Thrive Capital, but there are also other investment groups that are part of this. This secondary stock sale is going to value OpenAI at $500 billion, up from $300 billion just a few months ago when they did the previous raise. So in addition to dramatically increasing the valuation of OpenAI, this obviously provides a liquidity event for many existing and former OpenAI employees. This does two things. The first thing it does is create many, many, many new millionaires and multi-millionaires who are OpenAI employees or former employees. But it also does something that I think may achieve the opposite outcome of what OpenAI wants. As we discussed in previous weeks and months, there is fierce competition for talent between the different labs, with Meta recently offering tens of millions and even hundreds of millions of dollars in signing bonuses to leading scientists from other labs, including OpenAI. Well, a liquidity event basically gives people the opportunity to cash out from OpenAI and then go work somewhere else, because they already have some of their stock or stock options converted into cash, and then there's a new opportunity that may pay them even more. So we'll see how this plays out. I'm very curious about this, but I think we might see a lot of people leaving OpenAI because of this strategy.

Staying on a similar topic, a group of former OpenAI researchers has launched the Zero Shot Fund, a $100 million VC fund to back early-stage AI startups, signaling the growing influence of what is now called the OpenAI Mafia. The mafia term obviously comes as a reference to the PayPal Mafia: the multiple people who left PayPal, including Elon Musk, Peter Thiel, and other known figures in the VC world today, and went on to start VC funds and other companies. The same thing is happening with OpenAI right now, only on an even bigger scale. So if you think about the quote-unquote OpenAI Mafia starting new companies, you have Anthropic, which raised $7.2 billion. You have Safe Superintelligence, which now has a $32 billion valuation, with Ilya Sutskever. You have Thinking Machines Lab, with $2 billion raised at a $10 billion valuation. And these are just the leading examples.
There are a lot more companies that were started by people who left OpenAI. This obviously ties back very well to the bubble topic we talked about before. So on one hand, people gained amazing experience and were able to cash out from OpenAI; on the other hand, they're going to start companies, raise a lot more money, and keep this cycle going.

Two interesting releases happened this week. DeepSeek released version 3.1, which is scoring higher on the Aider coding benchmark than Opus 4, and it's doing this while costing roughly 60 times less: running Opus 4 costs about $70 per million tokens of output, while DeepSeek 3.1 costs only about $1.01. Like all the recent models, it is a hybrid architecture that integrates chat, reasoning, and coding into one model. And the fact that it is open source and available for you to download and run freely, hosting it either locally or on any cloud provider you want, makes it, I think, highly interesting for companies; see the quick sketch below for what calling it looks like. The only problem is that it comes from China. As we shared in the past few weeks, DeepSeek is trying to disengage itself from China, closing its Chinese operations and moving them outside of China. Will that really make them a mainstream company in the US? I don't know. But they're definitely trying, and they definitely have a very powerful and capable model that, again, is completely open source.
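For those reading the show notes who want to experiment, here is a minimal sketch of calling DeepSeek through its OpenAI-compatible API. This is an illustration, not official sample code: the base URL and the deepseek-chat model name follow DeepSeek's public documentation as best I know it, and the DEEPSEEK_API_KEY environment variable is a placeholder, so double-check the current docs before relying on any of it.

```python
# Minimal sketch: calling DeepSeek via its OpenAI-compatible API.
# Assumptions: the openai Python package is installed, DEEPSEEK_API_KEY
# is set in the environment, and the base URL / model name below match
# DeepSeek's current public docs (verify before use).
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # placeholder env var
    base_url="https://api.deepseek.com",     # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # DeepSeek's chat model name per their docs
    messages=[
        {"role": "user", "content": "Summarize this week's AI news in one sentence."},
    ],
)

print(response.choices[0].message.content)
```

Because the interface is OpenAI-compatible, pointing existing code at DeepSeek is mostly a matter of swapping the base URL and the model name, which is exactly what makes open-weight models like this one so easy for companies to trial.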
The second interesting release that I want to share with you from a technology perspective this week is that former Twitter CEO Parag Agrawal is growing his company, Parallel Web Systems Inc., and they just launched a deep research API. They're claiming that it is better than GPT-5 and human researchers when it comes to web research tasks, and that its agent capabilities enable it to do a lot more than just research. Agrawal envisions that AI agents will take over the web, and I'm quoting: "There will be more agents on the internet than there are humans. You'll probably deploy 50 agents on your behalf." What he's referring to is that we're going to have multiple agents doing multiple things for us, including searching the web, collecting data, and acting on our behalf across the board. Again, that sounds a little science fiction, but this is the direction this is all going, and there are more and more companies pushing very aggressively in that direction.

The last two pieces of news for the day are interesting, unique, scary, weird, call them whatever you want. The first one is not as bad. Curio is a company that is now delivering AI-powered stuffed animals. They're marketed as screen-free alternatives for kids: basically, move your kid away from the screen by giving them a stuffed animal that can actually talk to them and engage with them. Do I think this is better than looking at screens? Maybe. I don't think we have enough science to prove it one way or the other, but it's definitely an interesting alternative, and I think we're going to see more and more of that, not necessarily in plushy animals, but in AI-infused toys as an alternative to screens for kids. Is there a risk of getting addicted to that as well? Absolutely. Do we know who controls that, going back to the Meta conversation, and who decides what's acceptable or not acceptable for these stuffed animals to say or do? Again, there are a lot of unanswered questions on that, but I definitely see engaging with AI-activated toys as part of our kids' future.

And then the weirdest news I have maybe ever shared on this podcast, and there's been a lot of weird news, is that Kaiwa Technology, which is a Guangzhou-based robotics firm, is developing the world's first humanoid, what they call a pregnancy robot, that has an artificial womb, and they're aiming for a debut in 2026. The goal is to be able to deliver one of these for less than $14,000, and the idea behind it is to bypass China's surrogacy ban and to try to help with the rising infertility in China. So there's a high rate of infertility in China that is growing every single year, and at the same time, surrogacy is banned in China. So they're building a robotic womb that will be able to grow human babies. Now, is this scientifically even possible? Well, there are scientists who are saying this will never happen, but on the other hand, in 2017 there was an experiment, tagged the biobag, that was able to sustain premature lambs for a very long time, connecting to their umbilical cords and providing an artificial womb for them. So can this happen or not? I'm not a hundred percent sure; again, much smarter people than me are sitting on both sides of the aisle on that particular topic. The only thing it brought to my mind is that, in addition to the fact that we are driving ourselves closer and closer to a superintelligence that will control everything, and potentially to Terminator-style results (again, nobody knows), we're also developing the capability that will enable AI to do what happens in the movie The Matrix, where humans are grown in boxes. So sadly, we are going to end this particular episode on a really weird and eerie note. I really hope that is not going to happen. I think there are better ways to solve the problem, but if this gives a spark of hope to people who cannot have babies, that they might be able to do it through a robotic surrogate, well, then there's one positive thing out of this.

We'll be back on Tuesday with a fascinating episode, in which we will show you exactly how to use AI combined with n8n in order to harvest your target audience from LinkedIn in a very simple way. We're literally going to give you the blueprint on how to do that, so even if you know nothing, you'll be able to do this by the end of the show. That's it for today. Have an awesome rest of your weekend. Keep on experimenting with AI, keep sharing it with the world. And if you are in San Francisco and you're listening to this before Monday the 25th, please connect with me on LinkedIn and come join me on Monday evening. Have a great rest of your weekend.
