The Bid Picture with Bidemi Ologunde
The Bid Picture is a podcast about building a healthier relationship with technology and using it to live better. Host Bidemi Ologunde delivers three episodes a week: Tuesday quick-hit Briefs with practical frameworks, Thursday candid conversations with entrepreneurs and innovators solving real-world problems, and weekend deep-dive breakdowns of the biggest tech stories (from everyday devices to AI). Less noise, more clarity—so you can use tech wisely and move with intention.
493. Elon Musk, Sam Altman, and the War Over OpenAI’s Mission
Email: bidemiologunde@gmail.com
In this episode, host Bidemi Ologunde examines the ongoing legal battle between Elon Musk and Sam Altman over OpenAI's founding mission, corporate structure, and future direction. Was OpenAI built to serve humanity, or has it become another powerful commercial technology company? Who should control advanced AI systems that now influence schools, workplaces, health care, customer service, and everyday decision-making? Through real-world incidents involving chatbot mistakes, fake AI-generated legal citations, and the growing use of AI in daily life, Bidemi explores what this dispute reveals about trust, accountability, healthy technology use, and the kind of AI future society should demand.
In November 2022, a man in British Columbia named Jake Moffatt was trying to do something ordinary under painful circumstances. His grandmother had died, and he needed to fly from Vancouver to Toronto for the funeral. So, like many people now do, he went to a company website and asked the chatbot for help. The chatbot told him that he could buy his Air Canada ticket first and apply for a bereavement discount afterward, within 90 days. So he trusted the answer, bought the tickets, took the trip, and later discovered that the airline's real policy said something different. When Air Canada refused the refund, the case went to a tribunal, and the company made a remarkable argument: Air Canada suggested that the chatbot was responsible for its own actions. The tribunal disagreed. The chatbot was part of Air Canada's website, and the company had to stand behind the information it gave customers. That small dispute over a few hundred dollars captured something much bigger about the age of artificial intelligence. When machines talk like agents, people treat them like agents, and responsibility cannot disappear just because the interface has become conversational.

Around the same time, another story was unfolding in a New York courtroom. Two lawyers submitted a legal brief containing case citations produced by ChatGPT. The cases looked real, sounded real, and appeared in the familiar language of legal authority. Several of them did not exist. A federal judge sanctioned both lawyers and their firm, making the message clear that professional judgment cannot be outsourced to a tool that invents facts with confidence.

Those two incidents are not directly about Elon Musk or Sam Altman, but they are the human doorway into their legal battle. One story involves a traveler trusting a chatbot during grief. The other involves trained professionals trusting an AI system in a court filing. Both stories ask the same question that now sits beneath the Musk vs. OpenAI fight: when AI systems become powerful enough to shape money, law, work, learning, health, and trust, who is responsible for keeping them aligned with human benefit?

So today I'm looking at the ongoing legal battle between Elon Musk and Sam Altman, a fight that began as a dispute over the founding promise of OpenAI and has become one of the most important corporate governance stories in technology. This episode isn't just about two famous men, bruised egos, or Silicon Valley drama. The deeper issue reaches into every ordinary household where someone is using an AI chatbot to write an email, understand a medical bill, prepare for an exam, apply for a job, plan a trip, or make sense of confusing information.

The basic story begins in 2015, when OpenAI was founded as a nonprofit organization with a mission to ensure that artificial general intelligence would benefit humanity. Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, and others were part of that early orbit. The public idea was bold and idealistic: if AI became powerful enough to transform civilization, the public needed a counterweight to closed corporate labs. OpenAI would pursue frontier AI with a mission tied to broad human benefit.

That mission matters because artificial general intelligence, usually shortened to AGI, refers to systems that could perform a wide range of cognitive tasks at or above human level. Nobody agrees exactly when such systems will arrive, how to measure them, or how much risk they carry.
Yet the phrase drives enormous investment, policy anxiety, and, of course, public imagination. It also sits at the center of OpenAI's structure, because OpenAI has always presented itself as a mission-driven organization trying to build powerful AI safely and broadly.

The problem, according to Musk, is that OpenAI moved away from that founding mission. Musk argues that OpenAI was supposed to remain nonprofit, open, and oriented toward public benefit. He alleges that the organization's later shift toward commercial partnerships, closed models, and a public benefit corporation structure betrayed promises that helped bring the organization into existence. In his telling, this is a case about charitable purpose, public trust, and whether a nonprofit mission can be transformed into a highly valuable commercial machine.

OpenAI's response is very different. OpenAI argues that Musk knew the organization needed massive funding to compete at the frontier, that he himself explored or supported for-profit structures, and that he later turned against the company after failing to obtain the level of control he wanted. OpenAI also points to Musk's own AI company, xAI, as evidence that the lawsuit is not only about ideals. According to OpenAI, the lawsuit is also about competition, influence, and an attempt to slow down a rival.

That is the legal and public relations battlefield. Musk says OpenAI abandoned its founding purpose. OpenAI says Musk is rewriting history because he lost influence over a company that became extraordinarily valuable. A judge and jury now have to decide which claims are legally valid, which facts matter most, and what remedy would be appropriate if any wrongdoing is proven.

The timeline is also important. Musk sued OpenAI and Sam Altman in California state court in early 2024, then withdrew that lawsuit in June of that year. He revived the fight in federal court in August 2024, again accusing OpenAI of putting profits and commercial interests ahead of the public good. In 2025, OpenAI countersued, accusing Musk of harassment and unfair conduct. By late 2025, OpenAI had completed a major recapitalization, renaming the nonprofit the OpenAI Foundation and creating OpenAI Group PBC as the for-profit public benefit corporation under the nonprofit's control. Microsoft, OpenAI's most important partner, held a major investment in that restructured entity. By 2026, the case had reached trial in federal court in Oakland, California. Musk took the stand and framed the case as a defense of charitable giving and public mission. OpenAI's lawyers challenged him on whether there had ever been an enforceable promise to remain nonprofit forever.

Judge Yvonne Gonzalez Rogers made one boundary especially clear: the trial was focused on OpenAI's corporate transformation, not a broad courtroom referendum on whether AI will damage humanity. That distinction matters. Musk has often warned about AI safety and catastrophic risk. OpenAI also talks constantly about safety, alignment, and responsibility. Yet the court is dealing with legal claims around corporate structure, charitable trust, alleged promises, unjust enrichment, and whether OpenAI and its leaders violated obligations connected to the founding mission. The public hears a debate about the future of humanity. The courtroom has to examine documents, duties, reliance, money, and governance.

This is where the story becomes more useful for everyday people like you and me.
The question of whether OpenAI is controlled by a nonprofit, a public benefit corporation, investors, employees, Microsoft, or some combination of all of them may sound distant from normal life. Yet governance determines incentives, and incentives determine products. Products determine what shows up in your child's homework, your doctor's documentation system, your bank's fraud alert, your employer's productivity software, your customer service chatbot, and your newsfeed. When a company builds technology at global scale, legal structure becomes a public health issue in the broadest sense. A company under pressure to grow quickly may design tools that maximize engagement. A company under pressure to satisfy investors may push products into workplaces before employees understand their risks. A company under pressure to win market share may make interfaces feel more human, more agreeable, and more emotionally sticky. A company under serious safety obligations may move more slowly, explain limitations more clearly, and accept that some profitable uses should be restricted.

Healthy technology begins with that simple recognition. Tools are never neutral once they are embedded in business models, social routines, and institutional decision-making. A hammer can be neutral on a workbench, but a recommendation algorithm, a chatbot, or an AI tutor becomes part of a living system. It changes behavior, incentives, trust, and attention.

For listeners of The Bid Picture podcast, this case lands directly in the show's mission: building a healthier relationship with technology and using it to live better. A healthy relationship with AI does not require panic. It requires agency. It requires knowing when a tool is useful, when it is unreliable, when it is manipulating your attention, and when a human being or institution must remain accountable.

So let's go back to the Air Canada case. A chatbot was helpful in theory. It gave quick answers, reduced customer service burden, and probably handled thousands of routine questions without incident. Yet one wrong answer during a moment of grief became a legal dispute, because the company wanted the benefit of automation without accepting full responsibility for the output. Healthy technology would mean designing the chatbot with clear guardrails, reliable policy retrieval, escalation to a human in sensitive circumstances, and company ownership of mistakes; a minimal sketch of that pattern appears after this passage.

Now let's consider the lawyers who cited fake cases. A generative AI tool can help lawyers brainstorm arguments, summarize documents, and make legal services more efficient. Yet when the system fabricates authority, the human professional has to verify the output. Healthy use means treating AI as an assistant whose work must be checked, rather than a source of truth whose confidence substitutes for evidence.

The same pattern applies to students. Pew Research Center has reported that a majority of US teens say they use AI chatbots, with many using them for schoolwork, finding information, and entertainment. That reality cannot be wished away by school policies alone. Students need AI literacy, not just AI restrictions. They need to understand when a chatbot is helping them learn and when it is doing the thinking for them. They need adults who can explain the difference between tutoring, cheating, research, summarization, and intellectual dependency.
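To make that design pattern concrete, here is a minimal sketch of a guardrailed customer-service flow in Python. Everything in it is an assumption for illustration: the llm_answer stand-in, the keyword list, and the policy store are hypothetical, not Air Canada's or any vendor's actual system. The point is the shape: quote verified policy text where it exists, escalate sensitive questions to a human, and label automated answers as automated.

```python
# Hypothetical sketch of a guardrailed support chatbot. All names and data
# here are illustrative assumptions, not any company's real system.

SENSITIVE_TOPICS = {"bereavement", "refund", "medical", "legal"}

# Verified policy snippets, maintained by humans and quoted rather than paraphrased.
POLICY_STORE = {
    "bereavement": "Bereavement fares must be requested before travel is completed.",
}

def llm_answer(question: str) -> str:
    """Stand-in for a language-model call; a real system would invoke an API here."""
    return "Here is a drafted answer to your question."

def answer_customer(question: str) -> str:
    # Crude keyword match; a real system would use retrieval or a classifier.
    topics = {w.strip("?.,!") for w in question.lower().split()} & SENSITIVE_TOPICS
    if topics:
        policy = POLICY_STORE.get(sorted(topics)[0])
        if policy:
            # Sensitive topic with a verified policy on file: quote it verbatim.
            return f"Per our published policy: {policy}"
        # Sensitive topic with no verified policy: escalate instead of guessing.
        return "I'm connecting you with a human agent for this question."
    # Routine question: the model may draft, but the answer is labeled automated.
    return llm_answer(question) + " (Automated reply; published policies govern.)"

print(answer_customer("Can I apply for a bereavement discount after my trip?"))
# -> Per our published policy: Bereavement fares must be requested before travel is completed.
```

The sketch is deliberately crude, but the accountability logic is the lesson of the tribunal ruling: the company, not the model, owns the answer.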
For workers, the stakes are equally practical. Some people are already using AI to draft emails, create presentations, summarize meetings, write code, analyze data, and handle customer messages. Others feel anxious that the tools will replace them, de-skill them, or turn work into constant surveillance. Pew has also found that many US workers are more worried than hopeful about AI's future impact in the workplace. That anxiety is rational when companies introduce AI as a productivity tool without explaining how performance will be judged, how jobs will change, and what protections will exist for workers whose tasks become automated.

This is why the Musk vs. Altman fight is not just a corporate drama. It's a debate over the social contract of AI. If OpenAI began with a mission to benefit humanity, then ordinary people are entitled to ask what benefit means in concrete terms. Does benefit mean cheaper software? Does it mean safer medical diagnostics? Does it mean better education? Does it mean productivity gains shared with workers? Does it mean open research? Does it mean careful deployment? Does it mean a foundation with billions of dollars to spend on public interest projects? These are not abstract questions when AI is entering schools, hospitals, call centers, law firms, banks, police departments, newsrooms, and, of course, our homes.

There are strong arguments on multiple sides. Elon Musk's supporters can argue that the OpenAI story shows why nonprofit missions need strong legal protection. If donors, researchers, and the public support an organization because it promises to serve humanity, then leadership should not be able to convert that trust into private wealth or strategic advantage. From that perspective, the lawsuit is a warning to every mission-driven tech organization: the original promise matters, and the public deserves enforceable commitments.

OpenAI's defenders can respond that frontier AI is extremely expensive. Training and running advanced models require data centers, chips, engineering talent, cybersecurity, safety research, legal compliance, and global infrastructure. A nonprofit funded by donations alone may not be able to compete with Google, Meta, Anthropic, xAI, and state-backed efforts abroad. From that perspective, OpenAI's commercial structure may be an attempt to fund the mission at the necessary scale while keeping the nonprofit formally in control.

There is also a competition perspective. Elon Musk runs xAI, which competes with OpenAI. When a lawsuit asks a court to slow a rival's restructuring, change its leadership, or redirect billions of dollars, the public should examine both principle and self-interest. A person can have sincere safety concerns while also having competitive motives. A company can defend its mission while also protecting a massive valuation. The public should resist turning complex governance disputes into simple hero-and-villain narratives.

There is a Microsoft perspective as well. Microsoft's partnership with OpenAI helped push generative AI into mainstream business software, including Copilot products across Microsoft's ecosystem. Microsoft also became financially tied to OpenAI's success. That relationship raises legitimate questions about concentration of power, cloud dependency, enterprise lock-in, and whether a small number of companies will control the infrastructure through which many people encounter AI. At the same time, Microsoft's investment helped make OpenAI's products widely available and accelerated adoption across organizations that wanted AI tools embedded in familiar software.
The restructured Microsoft–OpenAI relationship adds another layer. Microsoft has said it remains OpenAI's primary cloud partner, while OpenAI can serve products across other cloud providers. Microsoft's license to OpenAI intellectual property continues on a non-exclusive basis through 2032, and revenue-sharing terms continue through 2030 under a cap. These details sound technical, yet they affect market competition, enterprise choice, and the cost structure behind the AI tools that people use every day.

For regulators, the legal battle raises questions that go beyond one company. How should governments treat nonprofit-to-for-profit conversions in technologies that have broad social impact? How should attorneys general evaluate whether charitable assets remain protected? How should courts handle claims that a company's mission statements created enforceable obligations? How should antitrust authorities evaluate partnerships between AI labs and cloud giants? How should consumer protection agencies treat chatbots that provide false, harmful, or misleading advice?

For families, the questions are more immediate. Should a teenager use a chatbot for emotional support? Should a parent rely on AI to interpret a medical symptom? Should a worker paste confidential company information into a chatbot? Should a small business replace human support with automated agents? Should a college student use AI to generate a first draft? Should a church, mosque, nonprofit, or community group use AI to write outreach messages? These decisions are already happening at kitchen tables and office desks, far away from the Oakland, California courtroom.

A healthier relationship with AI begins by separating use from dependence. Use means asking a tool to help clarify, draft, summarize, translate, or brainstorm while keeping human judgment active. Dependence means letting the tool decide what is true, what is ethical, what is emotionally healthy, or what should be done next. That difference becomes especially important when the user is vulnerable, tired, grieving, young, lonely, under a deadline, or facing a decision that has legal, financial, medical, or emotional consequences.

This is also where design choices matter. AI systems should make uncertainty visible. They should cite sources when making factual claims. They should escalate sensitive situations to qualified humans. They should avoid pretending to be therapists, lawyers, doctors, or intimate companions unless strict professional safeguards exist. They should protect user data by default. They should make it easy to correct mistakes. They should avoid manipulative emotional language that keeps people engaged beyond their own interests.

The best uses of AI are already visible. In healthcare, AI-supported screening has shown promise in helping detect breast cancers earlier and reducing certain late-stage diagnoses in controlled research settings. Wearable devices have shown that they can help identify heart rhythm irregularities that may otherwise go unnoticed. In accessibility, AI can translate speech, summarize complex documents, generate captions, and help people with disabilities navigate digital information. In education, AI can provide personalized explanations when used honestly and with supervision. In small businesses, AI can help owners draft policies, understand customer feedback, and reduce administrative burden.

Yet even the positive examples carry the same lesson. Medical AI should support clinicians rather than replace them.
Wearable devices should prompt medical evaluation rather than produce self-diagnosis panic. AI tutors should develop understanding rather than produce polished ignorance. Workplace AI should reduce drudgery rather than intensify surveillance. Customer service AI should resolve routine issues while giving people a clear path to human help.

The Musk vs. Altman case reminds us that the architecture behind AI products is not separate from the user experience. Governance shapes safety budgets. Investor expectations shape launch timelines. Competitive pressure shapes marketing claims. Corporate structure shapes who gets paid, who gets heard, and who can say no. When a company says its mission is humanity's benefit, the public should ask how that mission survives contact with valuation, market share, and executive power.

For OpenAI, the reputational stakes are enormous. The company became a household name because ChatGPT made generative AI accessible to ordinary people. It also became a symbol of the speed at which AI can move from research lab to global infrastructure. If OpenAI wins the legal battle, it still faces the harder challenge of proving that its structure can align commercial success with public benefit. If Musk wins meaningful relief, the decision could reshape how mission-driven AI labs raise money, write founding documents, and protect charitable purposes.

For Musk, the stakes are also significant. He helped launch OpenAI, later founded xAI, and remains one of the world's most influential technology figures. His critique resonates with people who worry that AI has become too closed, too concentrated, and too profit-driven. Yet his own companies are for-profit, ambitious, and deeply tied to his personal control. That tension does not erase his arguments, but it complicates them.

For Sam Altman, the case puts leadership under a microscope. Altman is trying to build and finance one of the most consequential companies in the world while claiming fidelity to a mission that predates the company's commercial explosion. His challenge is not only legal; it is moral, operational, and communicative. He has to persuade courts, regulators, employees, partners, users, and the public that OpenAI's pursuit of scale has not hollowed out its founding purpose.

For the rest of us, the healthiest posture is neither blind trust nor reflexive rejection. AI is already useful. AI is already risky. AI is already ordinary. The public needs a vocabulary that can hold all three truths at the same time. A chatbot can help a student understand algebra and also help that student avoid learning algebra. A medical model can help flag disease and also reflect bias in training data. A workplace assistant can reduce repetitive writing and also become a quiet instrument of performance monitoring. A customer service bot can answer quickly and also mislead a grieving traveler.

The practical lesson is simple enough to remember and serious enough to practice. Verify important claims. Keep humans in the loop for high-stakes decisions. Use AI to expand your thinking rather than replace it. Avoid sharing sensitive information unless you understand where it is going. Treat emotional chatbots with caution, especially for children and vulnerable people. Demand clear accountability from companies that deploy AI systems. Ask whether a product helps you live better, think clearer, work healthier, and relate more humanely to other people. A small sketch of the human-in-the-loop rule follows below.
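Here is one way to encode that rule in software, offered as a hedged illustration rather than anyone's actual product: a hypothetical router that refuses to act automatically on any AI output that touches a high-stakes domain or asserts a checkable factual claim. The topic labels, the Draft type, and the routing logic are all assumptions for the sketch.

```python
# Hypothetical human-in-the-loop router. The labels and types are
# illustrative assumptions, not a real library or product API.

from dataclasses import dataclass, field
from typing import List, Tuple

HIGH_STAKES = {"legal", "medical", "financial"}  # assumed high-stakes domains

@dataclass
class Draft:
    topic: str
    text: str
    claims: List[str] = field(default_factory=list)  # factual claims to verify

def route(draft: Draft) -> Tuple[str, Draft]:
    """Decide whether an AI draft may be used directly or must be reviewed."""
    if draft.topic in HIGH_STAKES or draft.claims:
        # High stakes, or the draft asserts checkable facts: a person
        # verifies every claim before anything is filed, sent, or acted on.
        return ("human_review", draft)
    # Low stakes and claim-free (e.g., a brainstorm): fine as a starting point.
    return ("use_as_draft", draft)

brief = Draft(
    topic="legal",
    text="Motion draft citing two appellate cases.",
    claims=["Both cited cases exist in the official reporter."],
)
print(route(brief)[0])  # -> human_review
```

The rule is blunt on purpose: when the cost of a fabricated citation or a wrong medical suggestion is high, the default should be review, not trust.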
There is one more perspective worth adding, especially for a global audience. AI governance debates often focus on Silicon Valley, Washington, Brussels, and Beijing. Yet the everyday consequences will reach Lagos, Nairobi, Accra, Johannesburg, São Paulo, Mumbai, Manila, and every place where young people are trying to learn, entrepreneurs are trying to build, and institutions are trying to modernize. If a few companies define the default AI interfaces for the world, then culture, language, local context, and economic access become central questions. Healthy technology must include people who are usually treated as markets rather than decision makers.

The big-picture lens is useful because it asks us to move beyond spectacle. Elon Musk and Sam Altman are compelling characters, but the most important character in this story may be the ordinary user: you and me. The traveler asking about a bereavement fare, the lawyer rushing to meet a filing deadline, the teenager using a chatbot for homework, the employee wondering whether AI will help or replace them, the patient hoping an early warning is accurate, the parent trying to decide what tools belong in their child's life.

The legal battle will eventually produce rulings, settlements, appeals, or remedies. The broader battle will continue in product design, public policy, classrooms, workplaces, and households. The core question will remain: can we build powerful technology without surrendering human responsibility? That question belongs to courts and regulators, but it also belongs to listeners. Every time we use AI, buy AI, deploy AI, teach with AI, or trust AI, we participate in shaping its role. The goal should not be to live without technology. The goal should be to live with technology in a way that keeps human dignity, judgment, accountability, and care at the center.

The Musk vs. Altman case may decide specific legal claims about OpenAI's past. The bigger challenge is deciding what kind of AI future we are willing to accept. A healthy future will require companies that can be held accountable, users who know how to verify, regulators who understand the technology, educators who teach judgment, and leaders whose missions survive success.

That is the real bid picture today. The fight over OpenAI is a fight over promises, power, money, and trust. It is also a reminder that the most important technology question is not really whether a tool can do something. The better question is whether the tool helps people live safer, smarter, healthier, and more intentional lives.