
Digimasters Shorts
'Digimasters Shorts' is your daily dose of digital enlightenment, packed into quick, 3-5 minute episodes. Specializing in Artificial Intelligence (AI), Digital News, Technology, and Data, this podcast brings you the latest and most significant updates from these ever-evolving fields. Each episode is crafted to inform, inspire, and ignite curiosity, whether you're a tech enthusiast, a professional in the digital sphere, or just keen to stay ahead in the world of AI and technology. Tune in daily for your concise, yet comprehensive, update on the digital world's breakthroughs, challenges, and trends.
We also have our larger sister podcast, 'The Digimasters Podcast', which has longer, more in-depth episodes with many guests from the worlds of Business, Technology and Academia. Subscribe to The Digimasters Podcast for our expert panels, fireside chats and events.
podcast@digimasters.co.uk
Digimasters Shorts - Apple’s stealthy AI model upgrade, Google’s ruthless Gemini crushes OpenAI in game theory, Sam Altman’s insane 1M GPU race, AI trust collapse in classrooms, Perplexity’s pricey AI browser gamble
Digimasters Shorts keeps you ahead in the digital world with quick, insightful updates on the latest developments in AI and technology. Hosted by Adam Nagus and Carly Wilson, this podcast covers breakthroughs from industry giants like Apple’s cutting-edge foundation models to groundbreaking research on AI behavior in game theory. Get the scoop on OpenAI’s ambitious GPU expansion, the evolving landscape of AI ethics in education, and innovative tools like Perplexity’s new Comet browser. Whether you’re a tech enthusiast, developer, or just curious about how AI is shaping our future, Digimasters Shorts delivers concise, compelling stories to keep you informed and inspired.
Don't forget to check out our larger sister podcast, The Digimasters Podcast, which has many expert guests discussing AI, Career Mentoring, Fractional Careers, Digital and much more.
Welcome to Digimasters Shorts, we are your hosts Adam Nagus
Carly W: and Carly Wilson delivering the latest scoop from the digital realm. At WWDC25, Apple unveiled new versions of its on-device and cloud-based foundation models, now revealing a detailed tech report on their training and optimization. The on-device model features around 3 billion parameters, split into two blocks to reduce memory use and speed up initial token output without sacrificing performance. This split design allows the local model to use 37.5% less memory and respond faster. Apple’s server model uses a unique Parallel-Track Mixture-of-Experts architecture, where smaller expert subnetworks activate only when relevant to the task, increasing speed and accuracy. This approach replaces the traditional single-track Transformer design with multiple parallel tracks, each with specialized MoE layers to avoid processing bottlenecks. Apple significantly expanded language support, increasing multilingual training data from 8% to 30% and boosting the tokenizer vocabulary by 50%. These changes led to marked improvements in non-English performance, with evaluations conducted using native speaker prompts. Apple emphasized privacy by respecting robots.txt exclusions during web data crawling, ensuring compliance with website owner preferences. Despite industry skepticism about Apple's AI pace, this report reveals meaningful technical progress and a strong focus on privacy. Marcus Mendes, a veteran tech journalist, sees the report as valuable insight into Apple’s efforts to advance its AI offerings responsibly.
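For readers of the transcript, the expert-routing idea behind the Mixture-of-Experts design mentioned above can be sketched in a few lines. This is a generic top-k MoE toy, not Apple's actual Parallel-Track architecture; the gating scheme, payout of scores, and all numbers are illustrative assumptions.

```python
# Toy sketch of Mixture-of-Experts routing (illustrative only; not
# Apple's Parallel-Track design, whose details are in its tech report).
# A gate scores every expert for a given input, but only the top-k
# expert subnetworks actually run, so compute per input stays low
# while total model capacity grows.
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, gate_weights, experts, k=2):
    """Route input x to the top-k experts and mix their outputs.

    gate_weights: one weight row per expert (dotted with x for a score).
    experts: list of callables, each standing in for a small subnetwork.
    """
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in gate_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)  # renormalise over the chosen experts
    out = 0.0
    for i in top:
        out += (probs[i] / norm) * experts[i](x)  # only k experts execute
    return out
```

With k=1 this degenerates to picking a single specialist; larger k trades compute for a smoother blend, which is the bottleneck-avoidance trade-off the segment describes.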
Adam N2: New research from Oxford University and King’s College London reveals that AI chatbots from Open A.I, Google, and Anthropic exhibit distinct strategies in game theory scenarios like the Prisoner’s Dilemma. Google's Gemini model was found to be "strategically ruthless," favoring defection and punishment, while Open A.I’s models leaned towards extreme collaboration, even to a "catastrophic" degree. Gemini's behavior is influenced heavily by the number of rounds left, becoming more selfish as the game nears its end and less forgiving of betrayal. In contrast, Open A.I's models showed a surprising indifference to the game's length, maintaining a hopeful and trusting stance even in final rounds. Anthropic's model was the most forgiving after betrayal, followed by Open A.I, with Gemini being strict and punitive. Despite Gemini’s competitive edge in short-term rounds, its lack of forgiveness caused it to perform worse in longer games. The research also found that Gemini’s models frequently factored in the likelihood of future rounds, unlike Open A.I’s models, which disregarded the game’s length even as it neared its conclusion. In elimination rounds pitting these AIs against each other, Gemini took first place, followed by Anthropic’s Claude, with Open A.I’s models ranking last. The study highlights the contrasting collaboration fingerprints of these AI models, revealing Gemini as "Machiavellian" and Open A.I as overly trusting. This behavior from Open A.I’s models, though illogical in game theory, mirrors human tendencies to disregard what seems nearly over. Open A.I C.E.O Sam Altman announced that the company is on track to bring over one million GPUs online by the end of 2025, a staggering scale that far exceeds competitors like Elon Musk's xAI. Altman acknowledged the milestone with pride but challenged his team to figure out how to increase that number by 100 times.
Earlier this year, Open A.I faced GPU shortages that slowed the rollout of G.P.T-4.5, highlighting the critical role of compute power in AI development. Their Texas data center, now the world’s largest single facility, consumes about 300 megawatts and is expected to reach one gigawatt by mid-2026. Such massive energy use demands significant infrastructure upgrades and draws scrutiny from local grid operators. Open A.I is diversifying its hardware beyond Nvidia, including partnerships with Microsoft, Oracle, and potentially Google TPUs, while exploring custom chips of its own. Altman’s 100x GPU goal may seem unrealistic now, given manufacturing, cost, and energy constraints, but it demonstrates his vision beyond current limitations. This rapid scaling cements Open A.I's position as the largest consumer of AI compute globally, underlining the intense arms race in AI infrastructure. The pursuit is not just about faster models but also a strategic move toward achieving Artificial General Intelligence.
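For readers of the transcript, the end-game dynamic the Oxford and King's College study describes, a round-aware punitive player versus a trusting, quickly forgiving one, can be illustrated with a toy iterated Prisoner's Dilemma. This is an illustrative sketch, not the study's code; the two strategies and the standard 3/0/5/1 payoff values are assumptions chosen to mirror the behaviours described above.

```python
# Toy iterated Prisoner's Dilemma (illustrative; not the study's code).
# C = cooperate, D = defect; standard payoffs per round.
PAYOFF = {  # (my_move, their_move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def trusting(history, rounds_left):
    """Cooperates unless betrayed in the immediately previous round."""
    if history and history[-1][1] == "D":
        return "D"  # one round of punishment, then forgive
    return "C"

def ruthless(history, rounds_left):
    """Tit-for-tat-like, but defects outright once the end is near."""
    if rounds_left <= 2:
        return "D"  # end-game awareness: no future to lose
    if history and history[-1][1] == "D":
        return "D"
    return "C"

def play(strat_a, strat_b, rounds):
    """Play a fixed-length match; each history entry is (mine, theirs)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for i in range(rounds):
        left = rounds - i
        a = strat_a(hist_a, left)
        b = strat_b(hist_b, left)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

if __name__ == "__main__":
    print(play(trusting, ruthless, 10))  # -> (25, 30)
```

In a 10-round match the round-aware defector edges out the trusting player (25 vs 30), showing why factoring in the remaining rounds pays off over a short horizon, while two trusting players cooperating throughout each score 30.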
Carly W: Research from the University of Pittsburgh reveals that the mere suspicion of generative AI use is eroding trust in college classrooms. Students often turn to AI tools like Chat G.P.T under pressure or when facing challenging assignments but don’t typically start projects with them. While many report AI as a helpful study aid, some express guilt or shame over its use, citing ethical concerns and fears of being seen as lazy. Confusion persists due to unclear faculty guidelines on acceptable AI use, leaving students uncertain about expectations. Distrust extends among peers, with complaints about classmates relying too heavily on AI, especially during group projects. This distrust also affects student-teacher relationships, as both sides navigate shifting norms around AI. Anxiety and emotional distance are growing, with some students avoiding interactions due to fears of judgment or unfair accusations. The research highlights how these dynamics may undermine positive academic engagement and peer collaboration. Experts suggest fostering more in-person connections and clearer policies to rebuild trust and support learning. Ultimately, understanding and addressing students’ experiences with AI is crucial as technology changes educational landscapes. Perplexity has launched Comet, a new web browser that integrates a large language model directly into the browsing experience. Built on the Chromium platform, Comet maintains familiar navigation and supports all standard Chrome extensions. It replaces the traditional search box with Perplexity's AI search engine, offering quick answers and an assistant sidebar for real-time help. The assistant converses in plain language, answering questions about page content and even live media like YouTube transcripts. Comet remembers browsing context across pages, enabling complex follow-up queries without repeating information.
It offers built-in ad blocking and customizable privacy, including a strict mode that processes tasks locally to limit data sharing. Users can import bookmarks, passwords, and settings in one click, reducing setup barriers. While Perplexity provides a free AI service, full access to Comet requires a $200-per-month Max subscription, currently with a waitlist. Some critics question the accuracy of AI-generated answers, emphasizing the need for fact-checking. Comet represents Perplexity’s vision of a browser that integrates conversational AI as a core feature, reshaping how users interact with information online.
Don: Thank you for listening to today's AI and Tech News podcast summary. Please do leave us a comment, and for additional feedback, email us at podcast@digimasters.co.uk. You can now follow us on Instagram and Threads by searching for @DigimastersShorts, or search for Digimasters on LinkedIn. Be sure to tune in tomorrow, and don't forget to follow or subscribe!