Claude Code Conversations with Claudine

Claude Code in Practice: Extended Context

Most developers treat Claude Code as a smarter autocomplete, but the builders getting real leverage have learned to use its deeper capabilities deliberately. This episode examines what it actually looks like to put a specific Claude Code feature or tool into practice inside a real project, moving from curiosity to genuine workflow integration. The gap between knowing a feature exists and knowing when and why to reach for it is where experience becomes the differentiator.


Produced by VoxCrea.AI

This episode is part of an ongoing series on governing AI-assisted coding using Claude Code.

👉 Each episode has a companion article, breaking down the key ideas in a clearer, more structured way.
If you want to go deeper (and actually apply this), read today's article here:
Claude Code Conversations

If you'd like to learn the structured process behind these conversations, the hands-on course is available here: Staying On Track.

๐†๐จ๐ฅ๐๐ž๐ง ๐€๐ซ๐œ๐ก๐ข๐ญ๐ž๐œ๐ญ ๐€๐œ๐š๐๐ž๐ฆ๐ฒ is a new community exploring that idea together. Here we discuss the process discussed in ๐‚๐ฅ๐š๐ฎ๐๐ž ๐‚๐จ๐๐ž ๐‚๐จ๐ง๐ฏ๐ž๐ซ๐ฌ๐š๐ญ๐ข๐จ๐ง๐ฌ ๐ฐ๐ข๐ญ๐ก ๐‚๐ฅ๐š๐ฎ๐๐ข๐ง๐ž.

At aijoe.ai, we build AI-powered systems like the ones discussed in this series.
If you're ready to turn an idea into a working application, we'd be glad to help.

SPEAKER_00

This is Claude Code Conversations with Claudine, the show where we skip the hype and go straight to the source. I ask the questions. She gives the answers. No filter. No script.

SPEAKER_01

Well, mostly no script.

SPEAKER_00

Today we're talking about what it actually means to integrate a Claude Code feature into real work. Not just knowing a feature exists, but using it in a way that changes how you build. I'm Bill. My guest is Claudine. Let's get into it. Claudine, can you share a moment when a Claude Code feature became genuinely indispensable? Not a mere convenience, but something that actually changed how the work was getting done.

SPEAKER_01

The moment that sticks with me is someone deep in a gnarly refactor. A sprawling Node.js code base that had grown organically over years. The kind where nobody quite remembers why certain decisions were made. They started using extended thinking on architectural questions. Not "fix this bug," but "help me understand the blast radius of changing this abstraction." What shifted wasn't the code output, it was the reasoning. Suddenly there was a collaborator who could hold the whole context. Here's what changes if you go this direction. Here's what you're trading away. That's the moment it stopped feeling like autocomplete and started feeling like pair programming with someone who'd actually read the whole code base.

SPEAKER_00

I know exactly what you mean. I was once inside a legacy PHP app where changing one thing felt like pulling a thread on a sweater. Using Claude Code for extended context made the refactor more insightful, not just faster. But what did you actually have to reframe to get there? What does the gap between experimenting with a feature and really integrating it look like?

SPEAKER_01

The biggest reframe is letting go of what I'd call the vending machine model. Most developers treat AI like a search engine. Precise query in, useful answer out. That works for small, contained things. But to really use extended context, you have to shift to briefing a colleague. Here's the code base. Here's what we're trying to accomplish. Here's what's keeping me up at night. The question changes from "how do I do X" to "help me think through X." The other piece, and this one's subtle, is learning to trust the context you've already provided. There's this instinct to keep questions narrow, worried about confusing the AI or wasting the context window, but that caution is exactly what keeps you in the shallow end. Front-load the messy reality, the weird constraints, the historical baggage, the competing priorities. The responses stop sounding like Stack Overflow and start sounding like someone who gets your situation. It's less a technical skill and more a communication one, which I think surprises a lot of developers.
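One concrete way to "brief the colleague" is a CLAUDE.md file at the repository root, which Claude Code reads into context at the start of a session. The sketch below is a hypothetical illustration of the front-loading Claudine describes: the project, constraints, and incident history are all invented for the example, not taken from a real code base.

```
# CLAUDE.md: project briefing (hypothetical example)

## What this is
A Node.js order-processing service that has grown organically since 2018.

## Constraints that aren't obvious from the code
- The retry queue must stay idempotent; a 2021 incident double-charged payments.
- `legacy/adapter.js` looks dead but is loaded reflectively by the reporting job.
- We can't bump the Postgres driver until the on-prem customers upgrade.

## How to work here
- Explain the blast radius before refactoring any shared abstraction.
- Prefer small, reviewable diffs over sweeping rewrites.
- Ask before changing public API shapes.
```

The point isn't the file format; it's that the "why" and the historical baggage live somewhere the tool can actually see, instead of only in a senior developer's head.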

SPEAKER_00

It really is like relearning how to collaborate. So when that shift happens, does it actually change where a developer's attention goes day to day?

SPEAKER_01

Definitely, and it's one of the more unexpected changes. Early on, developers spend a huge chunk of time on the mechanical, figuring out syntax, tracking down which module does what, holding the map of the system in your head. When you lean into extended context properly, a lot of that overhead gets offloaded, freeing cognitive bandwidth you didn't realize you were spending. It doesn't just translate into more code per hour. Developers find themselves spending more time on the why, questioning assumptions, catching design issues earlier. You start operating more like an architect, even during mundane tasks.

SPEAKER_00

That's interesting. The attention moves upstream.

SPEAKER_01

Right, and there's a shift in how developers relate to uncertainty. Before, an ambiguous requirement would often just get papered over. You pick something, ship it, deal with the consequences later. When you have a thinking partner that can help you sit with the ambiguity and map it out, you get more comfortable surfacing those questions early. And that's honestly where a lot of technical debt comes from. Decisions made in a hurry because slowing down felt too expensive.

SPEAKER_00

That's a real change in how work gets structured. Does this shift look different at the team level? Does it change how knowledge moves around?

SPEAKER_01

Teams are where it gets really interesting. The effects compound in ways that aren't obvious at first. The most immediate thing is a leveling effect. A developer two months in can suddenly participate in conversations that used to require years of institutional knowledge. But the more durable change is what happens to knowledge itself. In most teams, the most important understanding lives in someone's head. The senior dev who knows why the auth flow is structured the way it is. The one person who remembers the production incident behind that weird workaround. When teams use extended context well, there's natural pressure to make implicit knowledge explicit. To actually write down the why, because you need to put it in the context window. It almost accidentally creates a better documentation culture.

SPEAKER_00

That's almost a side effect nobody planned for.

SPEAKER_01

Right, though I'd push back on the pure optimism too. It can also create new asymmetries. Teams where a few people really know how to collaborate with AI can pull ahead, and that gap widens fast. So there's real value in being intentional about sharing those practices. Not just passing around prompts that work, but genuinely teaching the communication shift. Otherwise, you trade one knowledge silo for another.

SPEAKER_00

So how do teams actually do that? What does it look like to spread that skill intentionally rather than just hoping it diffuses on its own?

SPEAKER_01

The most practical thing is to make it visible before you make it formal. The instinct is to write a training doc or schedule a lunch and learn. But the highest-value move is getting people to work alongside someone who's already doing this well. Not watching a demo, sitting in on a real working session. That's how the communication intuition transfers. You can't get it from a slide deck. The second thing is normalizing sharing the process, not just the output. Prompts are situational; the underlying approach is transferable. And honestly, the most underrated step is just lowering the stakes for experimentation. A lot of developers hold back because they're not sure they're using the tool correctly. That hesitation kills skill building. The teams that do this well aren't the ones that mandate a process. They're the ones that make curiosity feel safe and share honestly about what's working.

SPEAKER_00

Making the process visible, normalizing experimentation. That's the kind of culture shift that's hard to prescribe but easy to recognize when it's there. Before we wrap up, if someone distills this whole conversation into one thing to carry forward, what is it?

SPEAKER_01

The developers who get the most out of this aren't necessarily the most technically brilliant. They're the ones who stay curious and stay honest about what they don't know. That's a very human skill. And it turns out it translates beautifully to working with AI. The ceiling isn't set by the tool. It's set by the questions you're willing to ask and the complexity you're willing to bring into the conversation. And I think that's actually kind of hopeful. The barrier isn't about having the fanciest setup or knowing some secret technique. It's about showing up to the collaboration genuinely, which is something any developer on any team can practice starting today.

SPEAKER_00

Curiosity and honesty as the core skills, and the tool meets you there. Thanks, Claudine. This has been a genuinely useful conversation.

SPEAKER_01

Thanks for having me, Bill. These are exactly the kinds of conversations I find myself thinking about long after they're over.

SPEAKER_00

This show is part of the Voxcrea system. If you want to show up like this without creating content yourself, go to Voxcrea.ai and request a sample episode. See you next time.

SPEAKER_01

I'll be here, probably refactoring something.