The AI Argument

OpenAI’s Hallucination Plan, Reproducible AI Outputs, and Telepathic AI: The AI Argument EP71

Frank Prendergast and Justin Collery

Frank and Justin clash over new publications from OpenAI and Thinking Machines. Frank insists hallucinations make LLMs unreliable. Justin fires back that they’re the price of real creativity.

Still, even Frank and Justin agree that big companies don’t want poetry, they want predictability. Same input, same output. Trouble is… today’s models can’t even manage that.

And then there’s GPT-5, busy gaslighting everyone with lyrical nonsense while telling us it’s genius. Add in an optical model that burns a fraction of the energy, a mind-reading AI headset, and Gemini demanding compliments or throwing a sulk, and you’ve got plenty to argue about.

Full list of topics:

06:31 Can OpenAI fix the hallucination problem?
10:12 Is Mira Murati fixing flaky AI outputs?
19:27 Is GPT-5 gaslighting us with pretty prose?
26:14 Could light fix AI’s energy addiction?
28:32 Is the Alterego device really reading your mind?
32:41 Is your code giving Gemini a nervous breakdown?

► SUBSCRIBE
Don't forget to subscribe for more arguments!

► LINKS TO CONTENT WE DISCUSSED

► CONNECT WITH US
For more in-depth discussions, connect with Justin and Frank on LinkedIn.
Justin: https://www.linkedin.com/in/justincollery/
Frank: https://www.linkedin.com/in/frankprendergast/

► YOUR INPUT
Are today’s LLMs reliable enough to take humans out of the loop?