How do you actually work with AI coding tools in production? How do you break down features into AI-friendly tasks? How do you choose between agents and manual prompting? How do you write code that LLMs understand better? And where do spec-driven workflows fit in?
0:00 - Introduction
1:09 - What's the WORST thing you can do when adopting AI?
4:44 - Experimentation vs. Following Old Mental Models
7:06 - Working at Feature Level: Breaking Down AI Tasks
10:10 - Cheesy's Workflow: Brainstorming, Stride, and Task Management
13:45 - Phil's Approach: Staying in Flow State vs. Using Agents
16:01 - The Death of Prompts: Plugins and Tools Take Over
18:11 - Context Engineering vs. Prompt Engineering
21:00 - Context Window Size: Bigger Isn't Always Better
23:48 - Spec-Driven Development → Task Management Tools
25:22 - Model Wars: Anthropic vs. Open Source (Qwen, DeepSeek)
30:00 - Should You Short Anthropic Stock? (Philosophical Discussion)
33:00 - Why Claude Code Still Leads Despite Model Convergence
35:01 - Hardware Costs and the Future of AI Accessibility
38:11 - Does Boilerplate Death Change Architecture?
42:00 - When Should You Care About Code Organization with AI?
45:26 - Writing Code FOR LLMs: Semantic JavaScript and Context
47:49 - Wrap Up: Future Topics on LLM-Friendly Code
YouTube: https://bit.ly/3Xfv2bp
Apple Podcasts: https://apple.co/4bNrAJK
Spotify Podcasts: https://spoti.fi/4bZjtcA
LinkedIn Group: https://bit.ly/3wZIWDM
RSS Feed: https://bit.ly/3KsaODW
Twitter: https://bit.ly/4ecWHju
