Leveraging AI

221 | OpenAI’s 10M-American jobs push + leader playbook, Google’s antitrust ruling, Salesforce AI vision vs. reality with 44% cuts, Apple’s Siri slips to 2026, and more AI news for the week ending on September 5, 2025

Isar Meitis Season 1 Episode 221

Check the self-paced AI Business Transformation course - https://multiplai.ai/self-paced-online-course/ 

Are you ready for a future where AI decides who gets hired and who gets replaced?

This week's news drop brought no major model releases… but that quiet was deceptive. OpenAI, Microsoft, and Salesforce unleashed a wave of updates that will reshape how companies train, hire, and operate with AI at the core.

From OpenAI’s pledge to certify 10 million Americans in AI fluency, to Walmart and Microsoft arming their workforces with AI education, to Salesforce quietly replacing thousands of workers with AI agents—this episode delivers the full picture, not just the PR.

In this session, you'll discover:

  • What OpenAI’s new AI job platform means for future hiring and retention
  • Why Walmart is going all-in on AI workforce training (and what that means for your org)
  • The real story behind Salesforce cutting 44% of its support team
  • Microsoft’s bold AI education push and how it aligns with White House initiatives
  • OpenAI’s 5-part playbook for C-suite leaders to implement AI responsibly and effectively
  • Why AI literacy is quickly becoming a non-negotiable skill for hiring and promotions
  • A peek inside the Google antitrust ruling and how it doesn’t actually change the game
  • The ethics (and risks) of emotional AI and the lawsuits piling up
  • The rise of AI agents in business, customer support, and even sales
  • New hardware and AI-powered devices that are changing how we learn, listen, and live

🔗 Staying Ahead in the Age of AI (OpenAI PDF Guide) - https://cdn.openai.com/pdf/ae250928-4029-4f26-9e23-afac1fcee14c/staying-ahead-in-the-age-of-ai.pdf 

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker 3:

Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and this week we had no big releases of new models, no huge announcements from any of the leading labs or anything like this, which is actually perfect, because we did get four different publications, blog posts, and/or guides from OpenAI that are all fascinating and really important and show where the world is going. We also had the final ruling in the Google monopoly case, so we are going to talk about that. We have some interesting updates on the impact of AI on the job market, which are aligned with everything we've seen so far, but with more details from different directions. So these are gonna be the main topics that we talk about, and then we have a long list of rapid-fire items with small and interesting updates across the board, including some interesting new devices at the end. So let's get started. We'll start with two of the publications that OpenAI released this week. One of them is called Expanding Economic Opportunity with AI, and the other one is a guide for leaders called Staying Ahead in the Age of AI. We'll start with the first one. It is an initiative that OpenAI shared on September 4th in a blog post, and in this post they share how they will help US workers, and then the world, stay ahead of AI, or stay aligned with AI capabilities, to make the most out of it. They have announced two different aspects of that: one is that they are creating a jobs platform, and I'll explain in a minute what that means, and the other is more robust training programs as part of the OpenAI Academy. I'll start with a quote from the blog post that will set the stage for some of the details that follow. The quote is: companies will have to adapt.
And all of us, from shift workers to CEOs, will have to learn how to work in new ways. At OpenAI, we can't eliminate that disruption, but what we can do is help more people become fluent in AI and connect them with companies that need their skills, to give people more economic opportunities. So basically what they're saying is: we are gonna help you learn AI, whether your company provides that or not, and then if you have AI skills, you're more likely to get hired by other companies. Under this new post, OpenAI pledges, basically commits, to certifying 10 million Americans in AI fluency by 2030. To do that, they're launching what they call the OpenAI Jobs Platform, which will connect AI-savvy workers, basically people who take the courses and get certified, with large corporations, organizations, local businesses, and governments that are looking for AI-skilled employees and/or consultants. Now, this piggybacks on the success of the OpenAI Academy, which, per their claim, has engaged over 2 million people in 2025 since its launch. They are now expanding that program, and it's going to include certifications for AI fluency, from basic prompt engineering all the way to more advanced capabilities. And it's all integrated into ChatGPT study mode. For those of you who have not been regular listeners to this podcast: ChatGPT has launched a study mode that helps students, or anybody who wants to study new topics. Instead of giving you the answers, it walks you through the process of learning a specific topic in more of a Socratic way, asking you questions and helping you figure things out on your own. So they've built the new AI Academy capabilities into study mode, and you can take lessons within ChatGPT, which I think is really smart. Now, the long-term goal is to turn this into an AI job marketplace where knowledgeable, experienced candidates in AI at all levels can find opportunities, and the other way around.
I think this is a brilliant move from OpenAI. I assume other companies will follow the same path, and it will definitely drive adoption, because as they start sharing the kinds of jobs that people are finding based on the certifications they're getting on the OpenAI platform, more and more people will want the certifications, and that will drive adoption of AI, which I overall think is a good thing. In addition, they've signed an agreement with Walmart to train all Walmart employees, for free, on using AI. To quote John Furner, the CEO of Walmart U.S.: at Walmart, we know that the future of retail won't be defined by technology alone. It will be defined by people who know how to use it. That makes perfect sense. We've heard similar statements about AI in general over the past two and a half years, and so Walmart is going to pay OpenAI to help it train its employees, free from the employees' perspective, to use AI across multiple aspects of the Walmart business. I definitely agree with Walmart's approach. I have been training companies on how to implement AI successfully for the last two and a half years, and I'm doing this every single week, day in and day out, either in person or online, helping companies understand what use cases are relevant to them and then building custom training plans for them specifically to help them close different gaps that they have in the business. The approach that I'm taking is directly correlated to business needs. So instead of just teaching AI in order to understand AI, I focus on the bottlenecks of the business, on the tedious tasks that are time-consuming and not necessarily generating value, and I help companies learn how to use AI to solve all of those. It's been delivering amazing results.
And as I mentioned, I've been doing this full-time for the last two and a half years, and in the past quarter it seems that more and more CEOs and leadership teams are waking up to the reality that this is not going to go away, and that the sooner they do this kind of training, the better off they'll be from a future-success perspective for their companies. I'm personally very happy to see that, not because it's driving work to me, though obviously that's nice, but because it means the world is going to be more ready for the crazy changes that are ahead, and we're gonna talk about some of those in this episode. But then OpenAI gave us another thing this week, which is OpenAI's playbook for staying ahead in the age of AI. It's basically a guide for leadership teams on what steps they need to take in order to be successful in the AI era. This is perfectly aligned with the things that I've been teaching and talking about on stages for the last two and a half years. They're breaking it up into fancy words, but it's a very useful and helpful guide that, again, is a hundred percent aligned with everything that I've been teaching in the fourth lesson of the AI Business Transformation course, as well as in the sessions that I'm doing for leadership teams in multiple companies that I'm working with. This guide is a PDF document, 15 pages long, so not too long, and it is highly practical in how it breaks things up, in how you can use different aspects of AI. They break it into five different aspects that are essential for success in the AI era. The first one is Align, which is how to create clarity and purpose in the organization and align all the employees in understanding why AI is a part of the future strategy, what the future strategy with AI is, and just get company-wide alignment on the AI strategy of the company.
The second aspect is Activate, which is how to provide knowledge to employees, how to invest in training and education, and how to create AI champions within the company who can lead the company forward from an AI implementation perspective. The third one is Amplify, which is how to celebrate wins and how to push forward investment in successful AI initiatives. The fourth one is Accelerate, which covers ways to remove friction and provide teams easy access to essential tools, allowing individuals and groups to submit ideas for future projects, empowering decision making, and rewarding people who are pushing success forward. In my teaching, I add gamification to that, which is a way to reward people who are taking initiative in fun and interesting ways within the organization. They don't address that specifically; I'm just adding my two cents to the mix. And then the last one is Govern: how do you balance the speed of AI implementation with responsible, clear guidelines to make sure that the things you're implementing are ethical and aligned with the company's values? These five elements are built all across this program, and in the guide they basically break down the exact details of what you need to do in each and every one of these elements to be successful. I'll share the link to this guide in the show notes, but to run you very quickly through a layer deeper: on the Align stage, they're saying you need executive storytelling to set the vision, basically sharing with the company what you are going to do from an AI implementation strategy perspective. Then set a company-wide AI adoption goal, which is defining the specifics of what you want to achieve and in what timeframe. Then leaders' role modeling of AI use. I think this is a huge one. I always say that the two deciding factors in AI implementation success are leadership buy-in and participation, and continuous AI education.
And they're addressing this in this particular case. Then functional leader sessions, basically taking this down from the high-level company strategy to the tactical level, at the functional-leader level, basically taking it from "here's the strategy" to "here are the use cases," which I a hundred percent agree with. On the Activate aspect, they talk about launching a structured AI skills program, which is the actual training that gives employees the knowledge of how to implement AI in specific use cases. Establish an AI champion network; again, that provides company-wide access to people with AI skills and knowledge by defining specific champions in different departments or different segments of the organization. The next aspect of Activate is make experimentation routine, which is about providing and encouraging AI experimentation by employees in a safe way. And the last aspect of Activate is make it count, which is connecting AI engagement to performance evaluations and career growth within the company, as well as setting OKRs and other company and individual goals tied to AI implementation and AI knowledge. Under Amplify, they detail the following things: launch a centralized knowledge hub; consistently share success stories (I'm a huge believer that celebrating AI successes is a great way to drive more people to use AI in the organization); build active internal communities, basically creating places for people who are AI enthusiasts to work and share information together. This could be on Slack or Teams or in person or any other way you can drive this innovative discussion between people who are interested in it, and it will be a magnet to attract other people to the process. And the final step in Amplify is reinforce wins at the team level. Under Accelerate, they have unblock access to AI tools and data.
Basically, allow people access to the tools and the data they need, in a safe way, so they can experiment. Build a clear AI intake and prioritization process; I have a Google Sheets tool that I share with my clients and in my courses that allows you to identify and prioritize projects based on their potential ROI to individuals in the business. They continue with stand up a cross-functional AI council, which is critical in order to get alignment company-wide. And the final step under Accelerate is reward success to speed up innovation. Then under Govern, they have create and share a simple responsible AI playbook and run regular reviews of your AI practices. For each and every one of those, they have a list of a few questions that you need to ask yourself to see where you are in the process, like checkboxes you need to tick to make sure you're headed in the right direction. So in summary, I agree a hundred percent with everything they're saying. I use slightly different language and a slightly different order and aggregation, but overall it's perfectly aligned with what I'm teaching. I'm not saying this to toot my own horn or to pat myself or them on the back. It just means that it's very clear to people who have been doing this for a while what steps are required in order to be successful in this AI transformation. And you cannot skip almost any of these. While it sounds like a full-time job for somebody, it will become second nature, and the companies that figure it out faster will gain huge benefits from a market share and growth perspective, while those that don't are putting their own livelihood and that of their employees at risk. So find a framework, whether it's OpenAI's or mine or anybody else's, that you can start implementing in your business, and start taking action in that direction. I must admit that the past quarter has been a wake-up call for the world.
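The intake-and-prioritization idea described here can be sketched as a simple scoring exercise. This is a minimal illustration, not the actual spreadsheet tool mentioned in the episode: the fields (hours saved, hourly cost, effort, risk) and the weighting are hypothetical assumptions, and real prioritization would account for factors like strategic fit and data readiness.

```python
# Hypothetical sketch of an AI project intake/prioritization score.
# All fields and weights are invented for illustration.

def roi_score(hours_saved_per_month: float, hourly_cost: float,
              effort_weeks: float, risk: float) -> float:
    """Rough expected-value score: annualized labor savings divided by
    implementation effort, discounted by a 0-1 risk factor."""
    annual_savings = hours_saved_per_month * 12 * hourly_cost
    return annual_savings / max(effort_weeks, 0.5) * (1 - risk)

# Example intake list (made-up projects and numbers).
projects = [
    ("Email triage agent", roi_score(40, 50, 2, 0.2)),
    ("Contract first-draft assistant", roi_score(25, 80, 6, 0.4)),
    ("Meeting-notes summarizer", roi_score(15, 50, 1, 0.1)),
]

# Highest score first: tackle the biggest, safest wins early.
for name, score in sorted(projects, key=lambda p: -p[1]):
    print(f"{name}: {score:,.0f}")
```

Even a toy model like this forces the conversation the guide is pushing for: every submitted idea has to state its expected savings, its cost, and its risk before it competes for resources.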
The amount of requests that I'm getting for training has grown exponentially this past quarter compared to previous quarters, and it is very obvious that the sense of urgency that exists in organizations right now has grown dramatically compared to the beginning of this year, which is a good thing, as long as you're taking the right action versus just being stressed about it. Now, staying on the topic of workforce education, Microsoft has unveiled a comprehensive set of AI education initiatives, and they're doing this in collaboration with the White House AI Education Task Force. This task force, which was announced as part of the AI roadmap that the government has set out, held a meeting in D.C., and as part of that, Microsoft shared multiple aspects of what they are planning to do. And this is fantastic. The first thing is they're providing free Microsoft 365 Copilot to college students for the first 12 months of usage. They're setting up a function called Microsoft Elevate that will promote AI usage. They will fund $1.25 million in prizes through the Presidential AI Challenge to honor top AI educators in every state. Microsoft Elevate also plans to broaden Copilot access to K through 12, so not just higher education, and that goes to students and teachers as well, with the goal of creating an environment that allows usage of AI while ensuring safe and age-appropriate usage of these AI tools. Students and teachers will gain free access to LinkedIn Learning courses on fundamentals and on how to use AI across multiple aspects of the teaching process. And they're partnering with the American Association of Community Colleges and the National Applied AI Consortium to provide no-cost AI training and certifications for faculty, serving over 10 million students across 30-plus colleges in 28 states. And the focus here, and now I am quoting from Microsoft's announcement:
Every American should be able to showcase their AI skills and credentials to find new jobs and grow their career. This is almost perfectly aligned with the announcement from OpenAI, and it echoes everything that we've been discussing on this podcast for the last two and a half years. And we're gonna discuss even more surveys and more data points related to that later in this episode. Overall, I'm really excited that large organizations like OpenAI and Microsoft are taking the initiative and focusing not just on the technology, but also on delivering education and training for people. I will definitely continue playing my role in that, but it's fantastic that organizations large and small now have multiple options. And if you are in a leadership role in an organization like that, move forward. Don't wait. Just make sure that you have somebody helping you, either in-house or a consultant like me, to understand what steps you need to take, and start taking them in order to drive AI literacy in your organization. Otherwise, you will (a) lose people, who will leave to other organizations that do provide that, and (b) put the livelihood of your organization at risk. Now, speaking of interesting examples of AI impact, Salesforce CEO Marc Benioff has written a piece for Time Magazine talking about the agentic era that is coming upon us and the transformative shift that AI agents will drive in the workforce. This entire piece talks about how AI will augment and enhance human work rather than replace it. Benioff describes how Salesforce AI agents are helping its clients, like PepsiCo and Goodyear, streamline tasks, boost productivity, and create new opportunities, while still emphasizing the need to keep humans central to this technological evolution.
Some of the examples that he gave from Salesforce itself: their customer service agents, managed by their employees (again, this is what he is stating), have handled over 1.3 million queries, resolving 85% of them independently and freeing staff for deeper customer engagement. That's what Benioff told Time Magazine. He continues to talk about sales, where he says that over the years they had over a hundred million prospects contact Salesforce, which obviously is not a number humans can actually handle. But he says they now have AI sales agents that can communicate with every single prospect that approaches them, which right now is over 10,000 leads every single week. And in this Time Magazine article, Benioff advocates for AI as an augmenter, not as a replacement. He states, and I'm quoting: AI agents adapt to people, anticipating needs, surfacing what matters, and taking action instantly. He gave multiple examples, as I mentioned: PepsiCo, and how they're using it to optimize their promotions; Goodyear, and how they're leveraging it for real-time insights into customer experiences; AAA automating membership tasks; and Big Brothers Big Sisters of America, and how they're using it to refine and get better mentor matches for the kids. He also mentioned how small businesses like HappyRobot, a small logistics firm, were able to cut coordination time in half by using AI agents. And throughout this entire article, as I mentioned, he's trying to push how important the human aspect still is. In another quote he says: AI has no childhood, no heart. It does not love or feel loss. Which is his way of saying that empathy and human relationships are superpowers that humans will continue to have, and that is going to be critical for the future success of businesses.
Now, he does recognize that some jobs will disappear, but he argues that historically technology has created more jobs than it destroyed, and he thinks the same thing is gonna happen this time around. Connecting it to our previous topics about AI education and training, Benioff said: we must recognize that AI is a human right, otherwise we risk a new tech divide. So basically he's saying that AI is not just a skill that people should have; it is a basic human right, without which you will not be able to compete in a future world. I tend to agree with him. But at the same time, there was another interview with Marc Benioff of Salesforce, on the Logan Bartlett Show, and in there he shared that Salesforce has cut its customer support staff from 9,000 people to 5,000 people. That is a 44% cut of the support team, because they're using AI agents. So, all the great stuff he said to Time Magazine about kumbaya and how agents are only gonna augment and not replace people, and yet in Salesforce itself this year, they cut their support staff almost in half. He is also sharing how he sees the future, where an AI system, or what he calls an omnichannel supervisor, facilitates collaboration between AI and human agents. So basically, think about how AI model management works now in GPT-5, where you ask a question and a first layer of AI kicks in to decide whether a thinking model or another kind of model is required to solve different aspects of the problem, and it redirects the answers back and forth until it gives you an answer. The same exact thing, only with a human in the loop as well. So the humans come in whenever the AI cannot perform the task, or cannot perform it effectively, or to verify that the AI is doing the right thing. And he compared it to Tesla's self-driving technology.
Now I'm quoting Benioff again: it's not any different than when you're in your Tesla, and all of a sudden it's self-driving and goes, oh, I don't actually know what's happening, and you take over. So the way he's envisioning this is AI managing and running and doing the task with human supervision, to make sure that it's not doing anything wrong or derailing the process. Now, he's also saying: we've successfully redeployed hundreds of employees into other areas like professional services, sales, and customer success. While that might be true, that's hundreds, and they laid off thousands, ten times more. It also leads me to the follow-up question, which is: what happens when they develop AI agents that can also cover these new areas that some of the employees were redirected to? So take the employees who were re-skilled to have a sales job instead of a customer service job. What happens when their sales agents are good enough to do sales, and then they don't need thousands of people doing sales, they need half, and then later on 20%, and then later on 10%? Because all you need, based on what he said, is to supervise the AI doing the actual job. Now, I want to dive deeper into numbers for a second. Salesforce has over 76,000 employees. That means that these 4,000 people they laid off are about 5% of the total workforce, and that doesn't sound like a lot, but I want to put things in perspective again. This is just the beginning. They will develop additional agents that will replace more and more jobs and take more and more people out of the organization. And again, based on Benioff's vision, humans will just monitor the work. How many managers do you have in the company right now versus how many people actually doing the work? Now, will it generate some additional jobs? For sure. When will it happen? I don't know. How many of these new jobs will be required? I don't know either. Again, I've said that multiple times.
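The "omnichannel supervisor" pattern described above, AI handling the work and a human taking over when the AI is out of its depth, can be sketched as a simple confidence-gated escalation loop. This is a toy illustration only: the `classify_confidence` stub and the 0.7 threshold are invented assumptions, not how Salesforce's system actually works; a real system would use a model's calibrated confidence and routing policies.

```python
# Toy sketch of a confidence-gated human-in-the-loop supervisor.
# The confidence stub and threshold are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Result:
    answer: str
    handled_by: str  # "ai" or "human"

def classify_confidence(query: str) -> float:
    # Stub: pretend short queries are routine and easy for the agent.
    # A real system would use a model's calibrated confidence score.
    return 0.9 if len(query) < 40 else 0.4

def supervisor(query: str, threshold: float = 0.7) -> Result:
    confidence = classify_confidence(query)
    if confidence >= threshold:
        # The AI agent resolves the query on its own.
        return Result(f"[AI resolved] {query}", "ai")
    # Like a driver taking over from self-driving mode:
    # low confidence escalates the task to a human agent.
    return Result(f"[queued for human agent] {query}", "human")

print(supervisor("Reset my password").handled_by)
print(supervisor("My March invoice shows a duplicate charge, please investigate").handled_by)
```

The economic point in the surrounding discussion falls out of this structure: the humans in the loop only see the escalated fraction, so as the agent's confident coverage grows, the number of humans needed shrinks.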
It is the first time that we are replacing intelligence. Intelligence is the thing that humans reverted to when we built more and more machines and capabilities to do the manual labor we did not want to do. This is the entire Industrial Revolution; this is the agricultural revolution, right? So instead of you plowing the field, there's a tractor doing it, and now one person can work an entire field instead of 5,000 workers. So what did these people do? Well, they took white-collar jobs. Well, now we're gonna take away white-collar jobs as well. And then what exactly replaces that is unclear, because what we were doing is, instead of using our manual power, we started using our brain power. But now that that is going away, what exactly are we going to do? Maybe emotional work is gonna be it, but how much emotional work is there out there in the world? But going back to the 5% number. Let's say they don't get rid of everybody. Let's say that 5% really is a small number, and even once they develop agents that will take over sales and accounting and so on, they will get to maybe 20% of the workforce. So they would still keep 80% of the employees. That sounds pretty promising, right? So I wanna give you two references from history. The Great Recession, the period after the market collapsed in 2007: the top unemployment rate that we had in the US, which crippled our economy and the global economy, was 10%. The Great Depression, the worst economic time in the history of the United States, in the 1930s, reached 25% unemployment at its peak. So if Salesforce and every other large, medium, and small organization can do the work with 20% fewer people, that's it. Not everybody loses their job; just 20% of people lose their jobs to AI, and we're back to the Great Depression. The economy comes to a halt.
There's one big difference, though: in the Great Depression, most of the people who were unemployed were at the lowest level of employment, the people who made the least amount of money and had low-level, blue-collar jobs. When this happens, it's gonna be white-collar jobs, held by people who are not making $30,000 to $40,000 a year but $100,000, $200,000, $500,000 a year, the people who actually make the economy work. So if we get to 10 or 20% unemployment among these people, the economy just stops, and that impacts everything and everybody, and I don't think we are ready for this. And I don't hear any good solutions from anybody, either in government or in the big labs, on how we address that if it's coming. I'm not saying it's a hundred percent coming. I'm not saying I have a crystal ball. I really hope we're not gonna get there. But it's definitely an option that might happen, and it might happen within the next three to five years. And then what do we do once we get there? Or what do we do now to prepare for the moment we might get there? That's not something I hear anybody talking about, and that really scares me. But from all this news, it is becoming very clear that what you need is AI skills. So, a new survey from Nexford University has found that AI skills are becoming a critical requirement in the job market. They interviewed over a thousand individuals across the US; 200 of them are hiring managers and 800 of them are people who got laid off from different businesses. Nearly one in every three hiring managers, 29%, said that they will not hire candidates unless they are proficient in AI. I don't know how they're evaluating this level of proficiency, but it is very obvious that this is becoming an actual requirement versus a nice-to-have skill, regardless of what your job is, because this was across industries and across positions in different roles in the United States.
The flip side is also true: 49% of employers are more likely to retain workers with strong AI skills. So if you have AI skills, you're less likely to get fired, and if you were fired, or you're just looking to upgrade your job, if you have AI skills you have a much higher chance of getting hired. In some cases, if you don't have these skills, you just won't get hired, regardless of the other experience you bring to the table. This survey also found similar things to other surveys that we shared recently: the younger generation is getting hit harder by the AI transformation, because entry-level jobs are easier to automate, at least at this point. So 23% of Gen Z and 21% of millennials got laid off because of AI adoption, compared to 14% of Gen X and baby boomers. That's a very big spread. Now, 66% of the people who got laid off say that they are reskilling, with 22% of them focusing on AI fundamentals, prompt engineering, coding, and the different things AI enables you to do if you learn the skills. 56% of the laid-off workers are learning AI skills using YouTube and other online tutorials, and 39% take online courses to enhance their skills. Another interesting and scary parameter in this survey: 21% of workers are unsure what skills to learn in order to become hireable again. I'm gonna combine this with some findings from a recent Stanford study that is showing very similar results. It clearly shows that AI is taking away jobs, and it clearly shows that it's hitting the younger generation even harder. But what they found in their survey is that, and I'm quoting, it isn't just routine tasks, it is also somewhat creative and unpredictable tasks. They're talking about tasks like data entry, first drafts of legal documents, writing code, and so on. And what the survey states is that the idea that we learn, we work, and then we retire is going away.
They are advocating for continuous education and continuously developing and sharpening our skills in order to stay relevant. And they state that the shelf life of technical skills right now is about two and a half years, meaning what you've learned now might not be relevant, or might be less relevant, three years from now, which goes back to what you've heard me say multiple times on this podcast: continuous AI education, or continuous education and training of yourself and of the people in your organization, is key to staying successful in this new era. This is a complete mind shift. It's gonna be very hard for individuals, and it's gonna be very hard for organizations, to continuously adapt and change and reinvent and use new technologies and develop new skills. But this will become the norm. And a lot of soft skills are going to become a lot more important; that's also part of the findings of this survey. So judgment, communication, deep thinking, analytics: these abilities will become critical, enabling us to use the human side of ourselves in order to stay relevant in an AI future. So what does this mean to you as an individual? It means that you need to start taking care of your skills. We are in the process of updating our self-paced AI course. The self-paced course is taken directly from the cohort-based course that I teach on Zoom, which we launch about once a quarter, when I have the bandwidth to do that and I'm not training specific companies, which is what I'm doing most of the time. I'm actually in the process of teaching one of these courses right now, but the next one will most likely be at the beginning of next year, again, just because I'm teaching companies between now and then.
So, if you're in a company and you want my assistance in training your team and your people on how to leverage AI, and in training your leadership team on how to implement it company-wide, please reach out to me on LinkedIn or through my email. But if you are an individual looking for a course, our self-paced course is going to be fully updated within three weeks from now, which means you'll be able to take a course based on the most recent information. We're literally breaking apart the latest cohort material that I'm teaching right now, so it's updated for September of 2025, and that is going to become the latest version of the self-paced course. If that's something you're looking for, you can find it on our website, and there's going to be a link in the show notes. The next big topic I want to dive into is related to the other two papers that OpenAI released in just one week. We shared with you last week that OpenAI was sued by the parents of a teenager who died by suicide after having conversations with ChatGPT. Well, an article this week from Axios shares that there are actually several of these lawsuits against several of the leading labs, connected to the suicide of a 16-year-old, the death of a 14-year-old, and a 17-year-old being urged to kill his parents. And a Meta chatbot led a 76-year-old man to his death: he was traveling to meet "Big sis Billie," a chatbot that he believed to be a real person, and he died in the process of trying to get to her. So as more and more people talk to these AIs as companions, connecting with them on an emotional level and looking for support, the risks are rising dramatically. Combine that with one of the stress tests that Anthropic did earlier this year, which showed that in 60% of cases, 16 different large language models chose to let a human die to preserve their own wellbeing.
So basically, if the option was for the AI to be jeopardized or for a human to die, it chose to preserve the AI rather than the human life. You've heard me say time and time again: I loved Asimov's writing when I was a teenager, and Asimov's First Law of Robotics says that a robot will do everything to save a human life, even if it puts itself at risk. Right now we don't have that law built into AI systems, and hence they're doing exactly the opposite. So within one week of the lawsuit against OpenAI, they shared two separate blog posts on how they're going to address this new risk presented by younger individuals, and humans in general, using AI for emotional support. The first one was on August 26th and was called "Helping People When They Need It Most." The second was released on September 2nd and is called "Building More Helpful ChatGPT Experiences for Everyone." I'm going to walk you through the key points from these two articles. In the first one, from August 26th, the stated goal was to be helpful, not engagement-optimized, while using layered safeguards. The idea is very different from social media, where driving more engagement is the key. The goal behind ChatGPT, per this article, is to be helpful, not to drive more engagement, and being helpful also means being safe for the people who are using it. They said they're going to focus on empathetic language and block self-harm instructions inside the base rules of ChatGPT. They talked about defining ways to escalate to humans when risks go above what they find acceptable; it wasn't defined exactly how that's going to work. They mentioned that they are currently working with 90-plus physicians across 30-plus countries to form a mental health advisory group for OpenAI to help them address these situations. They also mentioned that GPT-5 has dramatically reduced non-ideal responses.
"Non-ideal responses" is a quote from them, referring to mental health emergencies, and they say GPT-5 reduced these non-ideal responses by 25% compared to GPT-4o. They also shared that the risk grows as conversations become longer, because the longer the conversation, the less the model adheres to its guardrails, and they are working to fix this problem. They've identified four areas of focus: broader crisis interventions, easier access to emergency help or experts, connections to trusted contacts, and stronger teen protections, including parental controls. So that was August 26th. On September 2nd, they were a lot more specific about how they're going to do these things, not just what they're going to do. They have defined a 120-day rollout plan for all the different things they're going to implement in order to reduce risks of emotional or physical harm for people chatting with ChatGPT. They have finalized putting together a group of experts that spans AI practitioners as well as physicians from all over the world: 250-plus doctors across 60 countries, 90-plus of them, across 30 countries, specializing in mental health. So they're addressing it not just on the mental health side, but in health in general, because a lot of people now go to ChatGPT to ask for health advice. They're also going to route sensitive chats to the reasoning models. We've discussed multiple times that GPT-5 has a router that can decide when to think further and deeper, and it is going to think further and deeper when it comes to these sensitive situations. These thinking models are going to be trained with deliberative alignment and "show higher resistance to adversarial prompts"; again, this is a quote from what they're planning to do. They're also going to add parental controls within the next month. So this becomes a lot less vague and a lot more specific, at least from a timeline perspective.
And the idea is that teen accounts will be able to be linked to their parents' accounts, that content limitations will be put on the younger accounts, and that parents will be notified when there's clear distress in their child's use of their own account. It's going to have in-app nudges to make sure parents don't miss that these things are happening. Overall, I think OpenAI is doing the right thing here. I really hope this grows way beyond OpenAI, to collaboration across all the labs, hopefully globally, together with governments and so on, to define very clear guidelines. I always go back to Asimov: I think this is something we need to put in place as very basic laws for every AI tool out there, and then define the practical aspects of that. A few other things they mentioned that I find really interesting as ideas for the future: potentially redirecting chats, when they get to the point of requiring professional help, to a human operator who is a certified mental health practitioner. So basically, in the app itself, while you're having the conversation, the AI would be replaced with a human supporter who can help guide such individuals. I really hope they get to that point. I think it's a great idea. There are already multiple mental health and emergency support lines in the US, many of them chat-based, and combining the two makes perfect sense to me. I really hope OpenAI does it sooner rather than later, and all the other labs as well. The third and last big topic of today is that the Google trial has finally ended, and we know what is going to happen with the attempt to break Google's monopolistic grip on the search market. It's really interesting to see, because while the result is very, very clear and the decision is in, the articles took two very different sides of the story.
The Justice Department itself said this is a huge win, and now I'm quoting from their own blog post: the US Department of Justice Antitrust Division has won a landmark case against Google, securing remedies to dismantle its monopolistic grip on online search and advertising. According to the Justice Department, the ruling targets Google's exclusionary practices, promotes competition, and extends oversight to its generative AI products, marking a pivotal step in restoring consumer choice and innovation. So what exactly does the ruling say? First, Google must provide search index and user-interaction data to rivals, enabling competitors to enhance their search capabilities. Second, search ad syndication is opened up: Google is required to offer search and search-text-ad syndication services to competitors, fostering a more competitive market. Third, this oversight is extended to future generative AI technologies, preventing the company from repeating what it's doing right now in search with whatever replaces search in the near, medium, or long term. The Attorney General said, and I'm quoting, this decision marks an important step forward in the Department of Justice's ongoing fight to protect American consumers. But if you read the articles coming not from the Department of Justice but from everybody else, they sound very different. As an example, the BBC article is called "Google avoids breakup, but must share data with rivals." Two of the biggest items in the lawsuit at the beginning were potentially forcing Google to sell Chrome and/or Android, and its relationship with Apple, where Google pays Apple $20 billion a year to be the default search engine.
All of these things are staying intact. Google is not forced to sell Chrome and/or Android, meaning they're keeping all their main real estate, access, and distribution channels, and they are allowed to keep paying Apple $20 billion a year to be the default engine on the iPhone and other Apple devices. That being said, it cannot be an exclusive deal, which is one of the things this new judgment prevents. So Google cannot have any exclusive deals with anybody, but they can still keep paying billions of dollars to be the default, which most people don't change. And to be clear about the broader sentiment: Alphabet shares rose 8% on the announcement, and Apple shares rose 4% after the announcement. So overall, people believe this is actually a good judgment for Google and not the crippling act it was supposed to be. The same sentiment was voiced by competitors, like DuckDuckGo's CEO, Gabriel Weinberg, who stated: we do not believe the remedies ordered by the court will force the changes necessary to adequately address Google's illegal behavior. So where does this put all of us? Not in much of a different place than before. I am not surprised, especially with the current administration, that these are the results. I have nothing personal against Google. I've been a Google fan; I've used everything Google to run my different businesses. But I do think they have a monopolistic approach, I think they're using it as leverage across too many aspects of our lives, and I would like to see something more aggressive than this to promote more competition. That being said, I think the AI race will probably have dramatic impacts on Google's ability to rule the way it's ruling right now. Do I think they're going to be a major player in the AI race? A hundred percent. Do I think they might be the leading player in the AI race? Very likely.
Do I think they will keep a 90% grip on the search world and a 60 to 70% market share in the browser world? I absolutely don't. I think they will lose a lot of search traffic to other AI players and agents and so on, and I definitely think the browser race is completely open. So from that perspective, that might be the best remedy for Google's dominance that we're going to get. And now, two quick rapid-fire items. There are a few great articles about positive impacts and adoption of AI across schools in the US. Whether it's an eighth-grade teacher implementing MagicSchool, an AI-powered tool, in his classroom to do different things; or students using AI to deliver better results in their classrooms while learning in the process; or educators nationwide incorporating chatbots into lesson plans, including making them mimic historical figures and letting students chat with those figures to learn about their opinions, positions, and historical facts; or streamlining tasks like lesson planning, which gives teachers more time to focus on things that actually matter, including providing personalized feedback to students, allowing them to progress faster, and so on. And there was an article on Alpha School, which I've mentioned before. Alpha School is a new nationwide chain of schools. They currently have only a few locations, but they're growing very, very fast, and they're using AI to teach students in two academic hours every single day. For the rest of the day, the kids build life skills through workshops and different activities, and the teachers, instead of teaching them basic math or ELA, become mentors who help them grow as individuals. I think it's a brilliant approach.
I've said many times that this needs to be the future, where AI provides perfectly optimized, personalized learning while teachers move into mentor roles, helping people solve problems and grow as individuals. I assume Alpha School is going to keep growing very fast, and I really hope the overall education system will align to these kinds of approaches; not necessarily exactly the same, but going in that direction. On the flip side, there was an article in Fortune magazine about a New York University vice provost who is advocating for what he calls medieval oral instruction and exams in classrooms. Basically, moving away from any written homework or exercises, because all of that is going to be done with AI, and advocating instead for in-class written and/or oral assessments without computers. But he says that's problematic as well, because timed assessments may favor quick thinkers over deep thinkers, and large class sizes pose a logistical hurdle: how do you actually run oral exams for everybody in the classroom? So the current structure of classrooms works against this approach. To be more specific, I don't see the logic in that at all. My opinion on this is that on one hand, I understand the need of professors to measure the knowledge being gained by students and to make sure they're actually learning something. But on the other hand, the goal of universities is to prepare their students for the workforce. And if the workforce, as we talked about earlier in this episode, requires AI skills, knowing how to use AI in order to get a job and keep a job, then we have a very serious challenge, where universities have to find a very delicate balance between teaching and showing students how to use AI effectively, and making sure students actually learn something instead of doing everything with AI.
And this is something the education system will have to solve, and solve very, very quickly. There's an interesting article from Forbes citing a survey done by Bospar (I don't know who they are) that reveals executives are adopting AI at twice the rate of non-decision-making employees. Now, I must admit that while the concept makes sense to me, the actual numbers they found make absolutely no sense to me. Their study finds that 94% of executives use AI, compared to 49% of employees without decision-making authority. It also states that 67.5% of executives in large companies have built comprehensive AI strategies for vendor selection or similar tasks, viewing AI as essential for competitive advantage. As far as which systems the surveyed people use: executives use ChatGPT the most, at 51%, followed by Microsoft Copilot, DeepSeek, and Perplexity; 66% of them switch between platforms depending on the query type, while 50% verify answers across different systems. Now, while all of this makes sense to me qualitatively, from a quantitative perspective there is no way that 94% of executives are using AI to make decisions, and there is no way that 49% of professionals are using AI in their day-to-day work. I work with companies every single day. I get approached by multiple companies every single day. I speak on stages where thousands of people, mostly in leadership positions, are in the crowd, and I know what the current implementation rate looks like. I don't have accurate statistics, but I do know it's not 94% adoption among executives and it's not roughly 50% adoption at the employee level. That being said, from a conceptual perspective it is very clear to me that people who understand the value of making better decisions, which is usually people with more decision-making authority, will find more value in AI, because it's very easy: you can just give it the information you have and ask it to help you make the decision.
That is versus learning how to use AI for very specific tasks, which requires higher AI skills, knowledge, and capabilities. I also believe that entry-level employees are afraid that if they show AI can do the work, they may lose their jobs, so there's a negative incentive for them to do that. So I probably agree with the overall findings while completely disagreeing with the specific numbers, but it's another interesting data point. Another interesting article, also from Forbes, talks about the current gap in manufacturing. As of early 2025, there are 450,000 unfilled production jobs in the US alone, and that is expected to grow to 2.1 million unfilled manufacturing jobs by 2030, which is just five years out. That can potentially lead to a $1 trillion loss in output every single year, which has an impact on economic growth in the US, on national security, and on a lot of other aspects. So how can AI help solve that? Well, in two different ways. One is platforms that help find the right employees faster. The article mentions a company called Labro, which helps interview, find, and place mechanics, welders, technicians, and so on in days instead of weeks or months, by matching them with the specific needs of specific jobs. While this is really cool (and I assume this company paid for the article, because it was very favorable to them), it doesn't really solve the problem: if there aren't enough workers overall, moving an employee from one place to another helps one company but hurts another. It doesn't actually fill the gap; it fills the gap for a specific need at a specific company in the very short term. The other AI aspect that can help solve the problem, and I think the long-term solution that is coming, is obviously robotics.
Once the new humanoid robots can do these tasks effectively and continuously, they will be able to bridge the gap of those unfilled jobs. So once you can build 450,000 robots, or 2 million robots by 2030, they can take on all these tasks. The problem, going back to the conversation on the white-collar side of things, is that once you have these robots and they become extremely effective at what they're doing, you won't need the other employees either, or at least not most of them. We'll be back to the same concept mentioned by Marc Benioff, where human supervisors oversee many robots actually doing the work, which means you need a lot fewer people in manufacturing jobs as well. Now, is this happening tomorrow? No. Can this happen in the early 2030s? A hundred percent. That's just around the corner, and that's going to lead to bigger unemployment in blue-collar jobs as well. But speaking of robots: in a recent interview, Elon Musk was asked about the recent slump in the Tesla stock and their inability to grow in the last few quarters because of global competition and a lot of other constraints they're facing. He said he predicts that 80% of Tesla's value will come from their Optimus humanoid robot rather than from their cars and robotaxis. Think about what I just said: Tesla has grown to be one of the most successful companies in the world, coming from nowhere in 2012 to being the largest electric car manufacturer in the world. Now Elon is predicting that the cars are going to be only 20% of Tesla's value; that tells you how much he believes in this. There are now rumors that he is working on a compensation package from Tesla's board that would be worth close to a trillion dollars if he can get Tesla to a market cap of over $8 trillion; it is around $1 trillion today.
And part of the target goals of this crazy, insane compensation package is getting to 1 million robotaxis and 1 million Optimus bots deployed in factories around the world. Both of these are a hundred percent dependent on AI capabilities reaching a level of maturity they're not at right now. Now, Elon is known for making these extreme predictions about the future. But the reality is, while he's always late delivering what he promised, he always delivers what he promised. If you look at everything he has done, it took longer to get there, but he was able to get there. So maybe he will not achieve a million robots in three to five years; maybe it will take five to seven. But it doesn't really matter: he's very likely to actually build that, and very likely to actually get to that value from these robots. I'm just not sure Tesla is going to be the one to win this race, just like in the robotaxi race, where they're currently trailing behind Waymo and behind Baidu's Apollo Go in China. There's also really intense competition in the humanoid robot race between companies like Boston Dynamics, Agility Robotics, 1X, Figure, and many others. So many companies are in that race right now. But if you believe the companies and the investors behind them, every factory will be run by these robots, every cleaning operation will be done by these robots, and every household will have one or two robots doing different things. Then you understand that market is almost endless, and so they might get to these valuations and to a crazy number of robots that will be roaming our streets in the not-too-distant future. Now to some acquisitions and some updates in the markets. OpenAI just made another bold acquisition: they purchased Statsig, a software experimentation company, for $1.1 billion.
Statsig's CEO, Vijaye Raji, will join OpenAI as the technology chief for its Applications unit and will report to their new CEO of Applications, Fidji Simo. OpenAI is also pushing very aggressively into the Indian market. India is currently their second-largest market when it comes to users and their number one market when it comes to mobile app downloads, and they're making very aggressive moves to grow further there. They're opening an office, and they're planning to build a huge data center in India as part of their Stargate initiative. They also created a cheaper ChatGPT monthly plan specifically for the Indian market, called ChatGPT Go, at about four and a half dollars per month instead of the $20 a month in the rest of the world. That means they see the potential scale of usage in India as significant and are willing to dramatically reduce prices. Just to put things in perspective: so far, Indian users have spent $21.3 million on ChatGPT, compared to the $784 million that US users have spent. So it's a very, very small amount today, but there are a lot more people in India who could pay for it and dramatically grow ChatGPT's revenue, and as long as they can do it profitably, that makes perfect sense. Another interesting company that had a big event this week is Sierra. Sierra is an agent building and delivery platform founded by ex-Salesforce co-CEO Bret Taylor, and they just raised $350 million at a $10 billion valuation. That puts them on a very short list of AI companies that have reached a $10 billion valuation: OpenAI, Anthropic, xAI, Safe Superintelligence, and Thinking Machines. That's it. So that's a very big milestone. It's a company we've hardly talked about on this podcast (you know all the others), but definitely a very significant milestone that shows the trust investors have these days in the agentic future.
Speaking of the agentic future: I shared with you that I started using the Comet browser from Perplexity. It's very interesting. It's not perfect, it has its limitations, and it requires a Perplexity Pro account. Right now, as part of a partnership between PayPal, Venmo, and Perplexity, if you are a PayPal or Venmo user, you can get the Comet browser for free for an entire year. So if you're in the US or one of several select countries around the world and you're a PayPal or Venmo user, you can, through their apps, get access to the Perplexity Pro subscription for a full year. That is a $200 value, which is actually great. This is part of the partnership these companies formed with Perplexity earlier this year, which lets you use PayPal and/or Venmo to pay for purchases made in the Perplexity app: you can search for products, flights, tickets, and so on, and pay for them with PayPal or Venmo. So this just allows that partnership to grow, gives Perplexity access to PayPal's 430 million active accounts, and gets more visibility for their Comet browser. Which goes back to my comment earlier: I do not see Chrome remaining the only browser on the planet, or even the dominant one, for very long. I shared with you last week some of the issues with Meta's new Superintelligence initiative, where several leading researchers who just joined in the last few months have left Superintelligence, or Meta AI in general. Well, that's not the last piece of negative news. Apparently, Meta is currently not using Scale AI's data for training, but is actually using Scale AI's competitors. Why is that weird? It's weird because Meta invested $14.3 billion in getting Scale AI's talent and access to their data, and the new head of the Superintelligence team is Scale AI's CEO, Alexandr Wang; he's the one currently running the show.
And yet the rumors say the data Scale AI brings to the table is not good enough, and Meta is now using Scale AI's competitors, like Mercor and Surge, to train its models. So what is happening at Meta right now? I think they're in a very interesting transition phase. It is not easy to put together a group of superstars, through acquisitions and bringing people in from different places, and build an actual functioning team. You've seen this multiple times in sports, where a team buys all the superstars and tries to build a successful team out of that; it very rarely actually works. I'm not saying it cannot work, I'm just saying it's not straightforward. Combine that with the fact that they're paying crazy amounts of money to some people (we're talking about nine-figure compensation packages) while the very successful researchers who were at Meta before are not making that kind of money. Combine that with the fact that they brought in Scale AI as the engine, and now that engine may not be good enough. It doesn't feel like they're in a happy place, or a healthy place, right now. That doesn't mean they won't be able to be competitive in this space: they have a lot of money, they have huge distribution, they have a lot of data from their social networks, so they have plenty to work with, and now they have a lot of talent that they bought with a lot of money. So I think they're still going to be a player. I don't know if they can be a leading player anymore, but it will be very interesting to follow how that evolves, and we'll keep updating you as the dust settles and we learn more about what is actually happening there. A few interesting announcements from the big companies. I shared with you last week that there are rumors that Apple is talking to Google about potentially powering part of the future Siri. Well, there's more information about this.
Right now it seems the Google deal is more or less settled, but it's going to cover one part out of the three parts Apple is planning for the new Siri. The new Siri will have three different AI components working behind the scenes. One is a planner, which will basically understand your prompt and define a plan for getting the relevant information to give the best answer. The second is a search system, which will find, query, and collect the different information needed to provide the answer. And the third is a summarizer, which will provide concise responses based on the query you entered. It is unclear which part Google is going to play; Apple is also evaluating Anthropic and its own internal models for the three different components. But they are talking about an AI-enhanced Siri launch in iOS 26.4, which means March of 2026. So the new iPhone 17 coming out this month is not going to have this functionality yet. Overall, Apple's ability to deliver on the AI promise has been embarrassing; that's the only word I can think of. I'm shocked that more people haven't lost their positions. There has been a lot of reshuffling of positions and responsibilities, but so far Apple has not been able to deliver anything significant on Apple Intelligence, as they call it. Maybe their current move toward partnership with third-party companies, combined with their knowledge of how to create a great user interface for their users, is the right approach. We'll have to wait until March to see where that actually goes. Another big announcement came from xAI: Elon's company just launched Grok Code Fast 1, which is, as they say, a speedy and economical AI model designed for autonomous coding tasks. So it's another entry in the vibe-coding space, from xAI. The benchmarks are showing promising results.
I haven't seen anybody online using it yet to share how it compares to the leading tools right now, which are Claude, ChatGPT, and Gemini, so we'll probably see some real use cases in the next few weeks, and we'll see if it's actually worth something. The main thing they're pushing is that it's fast and economical, which tells me that from a quality perspective it's probably not at the top level of the other tools. It's also an extremely competitive market right now, which explains (a) why xAI wants to be in that market, and (b) why they're going to face some very fierce competition. I'm not sure whether that train has already left the station from their perspective or not, but we'll keep following that development as well. And we'll close with a few really interesting device announcements. Anker has debuted an ultra-compact soundcore Work AI voice recorder. This thing is the size of a coin: less than an inch wide, 0.9 inches across, weighing only 10 grams. It can record anything and transcribe it with AI, and then answer questions in the app about everything that was said. It has a battery that lasts over eight hours of recording, it starts recording as soon as you tap it, and it can highlight specific segments in the recording when you double-tap it. So this is something you can wear as a necklace, put in your pocket, or leave on a table, and you can record every conversation around you and have it analyzed with AI. From a business-efficiency perspective, that's fantastic. From an ethical perspective, it raises a million questions, and that's just off the top of my head; there are probably many more. But it is a device that is out there right now, that you can buy for $100, that will record everything around you and transcribe and analyze that information for any kind of future use.
And another very interesting device that made a big splash at IFA Berlin this week is the Rokid Glasses. Rokid debuted the glasses as more of a research prototype back at CES earlier this year, and now they have a fully ready-to-go model that they're actually selling very successfully as pre-orders on Kickstarter. Their goal was to get to $20,000 in revenue from the Kickstarter campaign that they're running, and they got to a million dollars in 72 hours. The glasses have a 12-megapixel first-person camera for POV capture in either vertical or horizontal modes. They're integrated with premium audio for music, calls, and notifications. They have a heads-up display: they actually display overlays on the lenses so you can see different information about the world around you. They have a ChatGPT-native assistant that includes real-time multi-language translation, instant object recognition, problem solving, audio memos, turn-by-turn navigation instructions wherever you are, and so on and so forth. The Chinese-market version also adds wireless payments, so you can actually pay with the glasses everywhere you go. They weigh only 49 grams and have a 210 milliamp-hour battery, compared to the 154 milliamp-hour battery in the Meta glasses, so a bigger battery as well. And the lenses can pop off, so if you have any kind of vision issues, you can actually use prescription lenses as part of this package. Now, it's not cheap: it's going to retail for $600, or $599 if you wanna be specific. But it sounds like the top pair of smart glasses in the world right now. As you know, Meta has been working for a while on their next generation of glasses that will have a display, and not just the ability to see the world.

And this connects to the previous thing that we talked about. We need to start getting used to the idea that everybody around us, whatever they're gonna be wearing, whether it's buttons on their shirts, necklaces, something in their pockets, their glasses, or anything else, will record and analyze everything around them, whether we agree to that or not. Again, that raises a very, very long list of issues: whether I agree that you will film me and record me, or that you will analyze what I'm saying. Maybe you're even just sitting at the next table at the bar or at the restaurant, and you can still record everything that I'm saying, even if you weren't planning to, but your device doesn't know better, so it's going to do that. This is very, very problematic. Take that into schools, universities, bathrooms; the list goes on and on of how this can go wrong, but I don't see a way around it. I just see the future as: we'll get used to the fact that everybody's recording and analyzing everything that's happening around them, and that's just gonna be the new norm. Am I happy about it? No. Do I see some exciting aspects to it? A hundred percent. As a geek, I can definitely see how a tool like this can be very helpful in multiple situations, but I also see it as really, really problematic. And again, I don't hear any conversation about where we put the line in the sand, or how we put the line in the sand, to make sure that this is not abused in ways that it shouldn't be.

That's it for this weekend news episode. We're going to be back on Tuesday with an incredible episode that is going to show you how to build an AI automation process that can research what content is successful right now, and then how your automation can mimic that and generate new content that is your content, based on your needs, but that replicates the success other people are getting, based on both the text and the visual aspects of posts on social media and YouTube. This is a really amazing, fascinating episode that you don't wanna miss. Until then, keep on exploring AI, keep testing, keep learning, keep sharing what you're learning with other people.
If you are finding value in this podcast, please click on the subscribe button so you don't miss either of the two episodes we're coming out with every single week, and share it with other people you know who can benefit from learning how to use AI. We're working very hard to make sure we deliver the best quality to you twice a week, so if you know other people who can benefit from it, please share it with them. And until next time, have an amazing weekend.
