AI-Curious with Jeff Wilser

How to Make Human-First Tech Decisions, w/ Tech Humanist Kate O’Neill

Jeff Wilser · Season 2, Episode 5

What does “human-first AI” actually look like when you have to make decisions under pressure, hit numbers, and keep trust intact?

In AI-Curious, we talk with Kate O’Neill — “the Tech Humanist” and author of What Matters Next — about how leaders can adopt AI in ways that strengthen human outcomes instead of quietly eroding culture, morale, and customer experience. We dig into why so many AI initiatives fail for non-technical reasons, how to think beyond short-term wins, and why prompting is less about “prompt engineering” and more about learning to delegate clearly.

Key topics:

Prompting as delegation: defining success conditions, constraints, and what “good” means (00:00)

Kate’s early work at Netflix and what personalization taught her about human impact (04:45)

What “human-unfriendly” tech looks like in practice, from subtle friction to scaled harm (09:28)

The Amazon Go example: how small design constraints can scale into behavior change over time (11:19)

AI in the workplace: why “cut, cut, cut” is shortsighted, and what gets lost when you optimize only for this quarter (14:14)

Trust and readiness: why reskilling fails when people don’t believe there’s a future for them (16:45)

The now–next continuum: making decisions that “age well,” not just decisions that look good immediately (17:29)

Preferred vs. probable futures: identifying the delta and acting to move outcomes toward what you actually want (19:22)

“Chatting with Einstein”: using AI to become smarter vs. outsourcing thinking (22:13)

Why most AI pilots fail: human and organizational readiness, not the tech itself (24:02)

Questions → partial answers → insights: building an organizational muscle that compounds (28:21)

Bankable foresight: why Netflix invested early in what became streaming (30:37)

Trend watch: the pivot from LLM hype to agentic AI, and why prompting still matters (38:58)

Sycophancy and “best self” prompting: getting better outputs by being explicit and structured (41:01)

Probability vs. meaning: what LLMs can do well, and what they can’t replace (44:45)

A fun real-world workflow: Kate’s Notion + AI system for hotel coffee-maker recon (46:26)

Career advice in the AI era: adaptability, “human skills,” and shifting definitions of value (49:21)

Guest
Kate O’Neill is a tech humanist, founder and CEO of KO Insights, and the author of What Matters Next: A Leader’s Guide to Making Human-Friendly Tech Decisions in a World That’s Moving Too Fast. She advises organizations on improving human experience at scale while making emerging technology commercially and operationally real.

KO Insights:

https://www.koinsights.com/about-kate/


Follow AI-Curious on your favorite podcast platform:

Apple Podcasts
Spotify
YouTube
All Other Platforms