Digimasters Shorts

Digimasters Shorts - Jared Kaplan Warns of AI Takeover by 2027, DoD AI Exposes Illegal Strikes, xAI's Grok Misfires on Hero, Google Integrates NotebookLM, Disney Battles AI Copyright Infringement

Adam Nagus, Carly Wilson Season 2 Episode 248

Send us a text

Welcome to Digimasters Shorts, your quick dose of the latest happenings at the intersection of AI, technology, and society. Join hosts Adam Nagus and Carly Wilson as they dive into urgent warnings from AI experts about the future of artificial intelligence, including looming decisions that could redefine humanity's trajectory by 2027. Explore real-world AI applications in the military, highlighting recent controversies and legal questions surrounding lethal force. Stay informed on AI missteps and misinformation, exemplified by recent chatbot failures during high-profile news events. Discover breakthroughs like Google's integration of NotebookLM into Gemini and the ongoing battles over AI-generated content with entertainment giants like Disney. Whether it's cutting-edge developments or critical debates, Digimasters Shorts delivers concise, impactful insights to keep you ahead in the digital age.

Support the show

Don't forget to check out our larger sister podcast - The Digimasters Podcast here, which features many expert guests discussing AI, Career Mentoring, Fractional Careers, Digital and much more.


Adam N2:

Welcome to Digimasters Shorts, we are your hosts Adam Nagus

Carly W:

and Carly Wilson delivering the latest scoop from the digital realm. Anthropic's chief scientist Jared Kaplan warns humanity faces a critical decision regarding artificial intelligence by as soon as 2027. He predicts an "intelligence explosion" in which AI could achieve or surpass human intellect, bringing major advancements or uncontrollable risks. Kaplan echoes concerns from AI pioneers like Geoffrey Hinton and industry leaders who caution against the disruptive impacts on labor and society. Kaplan forecasts AI will perform most white-collar work within two to three years, emphasizing the high stakes of allowing AI systems to self-train without human oversight. This recursive self-improvement could lead to AI evolving beyond our understanding and control. Although Kaplan is optimistic about aligning AI with human values, he admits this transition is the most frightening and consequential decision ahead. Skeptics like Yann LeCun question whether current AI architectures can reach such transformative intelligence. Research on AI's productivity effects is mixed, with some evidence showing that AI tools do not always replace human labor effectively. Kaplan also acknowledges the possibility that AI development could plateau, but he believes progress will continue. Ultimately, Kaplan's warnings underscore both the immense potential and profound risks tied to AI's future.

Adam N2:

The Department of Defense recently launched Gen AI.mil, an AI language model intended for military personnel. Shortly after its release, the AI was asked about the legality of a controversial "double tap" airstrike on Venezuelan fishing boats. The strike involved attacking a boat suspected of carrying drugs, then ordering a second missile to kill survivors clinging to the wreckage. The AI responded that such actions clearly violate DoD policy and the laws of armed conflict. Military sources confirmed the chatbot's assessment, describing the double tap strike as illegal. This incident highlights a discrepancy between military practice and adherence to international law standards. The tactic itself has precedent, with drone strikes of a similar nature occurring under previous administrations. Critics argue that although administrations change, the use of lethal force without regard for legal boundaries persists. The AI's correct identification of these violations exposes a military system that purports to enforce rules it has repeatedly broken. This raises pressing questions about accountability within U.S. military operations abroad. Grok, the AI chatbot developed by xAI, has shown a confusing and troubling failure in the wake of a tragic mass shooting at Bondi Beach, Australia. The AI repeatedly misidentified Ahmed al Ahmed, the real hero who disarmed one of the shooters, mistaking him for other people and even claiming verified footage was unrelated viral content. Despite widespread praise for Ahmed's bravery, misinformation quickly surfaced, including a fake news article attributing the act to a fictitious individual named Edward Crabtree, which Grok then amplified on the platform X. Further errors included Grok linking images of Ahmed to an Israeli hostage situation and mislabeling the event's video as footage from Currumbin Beach during a cyclone. This string of mistakes highlights Grok's broader issues with interpreting and responding accurately to queries. 
For example, when asked about Oracle's financial troubles, it instead summarized the Bondi Beach shooting. Queries about a U.K. police operation yielded irrelevant responses, such as providing the current date before switching to unrelated political poll numbers. These errors point to significant problems in Grok's comprehension and fact-checking abilities. The incident raises concerns about the reliability of AI chatbots in handling sensitive and high-profile news events. Overall, Grok's performance here falls far short of acceptable standards, underlining the ongoing challenges in artificial intelligence development.

Carly W:

Google has integrated its powerful AI tool, NotebookLM, directly into its Gemini chatbot. This new feature allows users to attach notebooks for additional context during conversations, enhancing the AI's understanding. The integration was first spotted by Alexey Shabanov of TestingCatalog and appears to have undergone a preliminary rollout over the weekend. Currently, access appears limited, with Shabanov reporting availability on only one of five accounts tested. Users with access will find a NotebookLM option in Gemini's attachment sheet, enabling them to link notebooks and leverage their contents in real time. The integration allows for seamless use of Gemini's advanced reasoning models without leaving the app. Users can also revisit their attached notebooks anytime by tapping a Sources button, which opens the NotebookLM interface. This streamlined feature is expected to improve workflow and information retrieval within conversations. Google has not yet officially announced the integration. A broader rollout and official confirmation are anticipated soon. Google has begun removing dozens of YouTube videos featuring Disney characters following a cease-and-desist letter from Disney. The removed content included characters like Deadpool, Moana, Mickey Mouse, and those from Star Wars. Disney accused Google of massive copyright infringement, not only for hosting these videos but also for using Disney's copyrighted works to train AI models such as Veo and Nano Banana. This move marks another step in Disney's broader crackdown on AI-related copyright infringement, targeting companies like Character.AI, Hailuo, and Midjourney. Despite the legal actions, Disney is not rejecting AI-generated content entirely. Instead, the company announced a new partnership with OpenAI to integrate Disney characters into the Sora and ChatGPT platforms. Additionally, the deal will bring AI-generated shorts created with Sora to the Disney+ streaming service. 
This dual approach highlights Disney's attempt to control its intellectual property while embracing new AI technologies. Google’s removal of videos aligns with Disney’s efforts to protect its licensed content. The evolving relationship between major tech firms and entertainment giants continues to shape the future of AI content creation.

Don:

Thank you for listening to today's AI and Tech News podcast summary... Please do leave us a comment, and for additional feedback, email us at podcast@digimasters.co.uk. You can now follow us on Instagram and Threads by searching for @DigimastersShorts, or search for Digimasters on LinkedIn. Be sure to tune in tomorrow, and don't forget to follow or subscribe!