
The AI Argument
Worried that AI is moving too fast? Worried, like me, that it's not moving fast enough? Or just interested in the latest news and events in AI? Frank Prendergast and Justin Collery discuss it all in 'The AI Argument'.
Contact Frank at frank@frankandmarci.com
linkedin.com/in/frankprendergast
Contact Justin at justin.collery@wi-pipe.com
X - @jcollery
OpenAI’s Hallucination Plan, Reproducible AI Outputs, and Telepathic AI: The AI Argument EP71
Frank and Justin clash over new publications from OpenAI and Thinking Machines. Frank insists hallucinations make LLMs unreliable. Justin fires back that they’re the price of real creativity.
Still, even Frank and Justin agree that big companies don’t want poetry, they want predictability. Same input, same output. Trouble is… today’s models can’t even manage that.
And then there’s GPT-5, busy gaslighting everyone with lyrical nonsense while telling us it’s genius. Add in an optical model that burns a fraction of the energy, a mind-reading AI headset, and Gemini demanding compliments or throwing a sulk, and you’ve got plenty to argue about.
Full list of topics:
06:31 Can OpenAI fix the hallucination problem?
10:12 Is Mira Murati fixing flaky AI outputs?
19:27 Is GPT-5 gaslighting us with pretty prose?
26:14 Could light fix AI’s energy addiction?
28:32 Is the Alterego device really reading your mind?
32:41 Is your code giving Gemini a nervous breakdown?
► SUBSCRIBE
Don't forget to subscribe for more arguments!
► LINKS TO CONTENT WE DISCUSSED
- Why language models hallucinate
- Defeating Nondeterminism in LLM Inference
- There's Something Bizarre About When GPT-5 Writes in a Literary Style
- Optical generative models
- Interact at the speed of thought
- Gemini requires emotional support or will freak out and uninstall itself from Cursor
► CONNECT WITH US
For more in-depth discussions, connect with Justin and Frank on LinkedIn.
Justin: https://www.linkedin.com/in/justincollery/
Frank: https://www.linkedin.com/in/frankprendergast/
► YOUR INPUT
Are today’s LLMs reliable enough to take humans out of the loop?