Definitely, Maybe Agile

AI and Automation with David Kilzer

Peter Maddison and Dave Sharrock · Season 3, Episode 209



Every so often in tech, two streams collide and everything changes. David Kilzer has spent 50 years putting automation to work in manufacturing and distribution around the world, and he thinks we're at one of those moments right now. The convergence of AI and humanoid robotics, in his view, is the biggest shift humankind has faced since fire.

In this episode, David joins Peter and Dave to unpack where automation ends, and AI begins, why confusing the two creates brittle systems, and what organizations should actually be thinking about when making investment decisions right now. The short version: don't slap AI on everything.

This week's takeaways:

  • Stay optimistic, stay connected, and participate in the change. Don't be overrun by it.
  • Automation works brilliantly within its designed boundaries. But unprecedented events expose its fragility in ways we don't always anticipate.
  • The shift toward flexible, adaptable robots means the environment no longer has to be built around the machine. The machine adjusts to the environment instead.

Peter [0:04]: Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and Dave Sharrock discuss the complexities of adopting new ways of working at scale. Hello everybody, so here we are again. I'm here with Dave and David. We'll go by Dave and David just so this doesn't get confusing for anyone in the audience. Welcome, David. Would you like to introduce yourself?

David [0:27]: My name is David Kilzer. I'm a founding principal at a firm called Strategic Transformation Advisors. That rather pretentious title is built on 50 years — really a half century — of putting automation into manufacturing and distribution at clients around the world. At heart, you'll find I'm a North Dakota farm boy who got his hands dirty early and fell in love with automation at that point. I did the engineering route, added an MBA at IU, and got put to work doing automation by General Electric Company. I've stuck with that for the last half century. Great deal of fun, great deal of change.

Dave [1:10]: As you're describing that, David, I can't help thinking about how much things have changed. We talk a lot right now about AI and the different eras of AI — how it's top of mind for a while, then disappears, then comes back. But right now it feels like everything is about AI. You're describing 50 years of working in automation. Part of our conversation today will be how AI and automation play together. What have you seen over that period of time? What did automation mean some time ago, and how has that meaning changed?

David [1:52]: What strikes me when I look back are the inflection points — moments when two different technologies came together. I should mention that I began automating things when relay logic was done with actual relays, so that stretches back a bit. I saw the arrival of programmable logic controllers, and I can think of some notable convergences. In the 80s, I was working with Digital Equipment Corporation, and in the middle of that work I got involved with a protocol called TCP/IP, and then along came packet switching. Both technologies were powerful, but I didn't have a clear picture of what their child would be. They birthed the internet. So I've seen the power of technology convergences. Another one: when GPS got together with mobile communications and gave us the smartphone, it changed the world.

Now, when I look at the changes indelibly printed into our future, it's almost a certainty in my mind that we're going to see a similar impact from the convergence of AI and humanoid robotics. In a recent TED talk — and this may be a bit hyperbolic — I described it as the biggest thing for humankind since fire. It's going to be that transformative, that powerful in terms of how it reshapes our future.

When I think about the transition from here to there, I think about all the alternative paths that could be followed. How do individuals, professionals, companies, and nation states pick their way through to find the optimum path for themselves and their constituents? That's what we're working on right now with our client base — trying to see a way forward as this hugely impactful technology is adopted at an unprecedented pace.

Peter [4:08]: It's definitely going very fast.

David [4:11]: Trying to project into the future is daunting when that future is sometimes a week away from some of these momentous changes. It's frightening, but it's so much fun. It's exhilarating every day to wake up, read, listen, and interact with experts trying to pull a clearer picture of where we're headed. Every day is an adventure, every day has something new. I can't imagine a more exciting time to be involved with technology.

Dave [4:56]: As you're describing that — because there are areas where it's so disruptive that as exciting as it is, it obviously has consequences. Peter knows this well, but I've got three kids who've just graduated university. A lot of parents with children getting to that university age are worrying about whether they should get a degree, and if so, what degree. You've described a very exciting place with lots of change, which is great when you're surfing the wave. But if you're just trying to get out there on your first surfboard, what areas should people be looking at? Where is the place to build expertise and knowledge?

David [5:43]: That's a great question. I'm a generation ahead of you — I've got grandchildren just now becoming collegiate age.

Dave [5:52]: So you've gone through this twice.

David [5:56]: And of course it comes up in conversation. There's no single answer for everyone. The thing I encourage them to do is stay optimistic and stay connected. One of the real dangers of AI is isolation, so stay grounded and be open to change. Don't let anyone — including the small voice in your head — pigeonhole you into one area. And no matter how well an argument is put forward, don't become enamored of an idea just because of the confident way it's presented.

A good analogy is the hallucinations we've all seen coming out of AI. They're clearly incorrect many times, but presented with the utmost conviction and perfectly articulate language. That's a good image for people to carry when they look to the future. Take most of what you hear with a grain of salt.

Peter [7:22]: They do say to believe none of what you hear and half of what you see. Though these days, I think "none of what you see or hear" might be the more accurate version.

David [7:36]: Actively test every proposition with a healthy degree of skepticism, but trust your powers of reasoning. And grow yourself in the area where AI is least likely to intrude — which is invention, imagination, the ability to picture and see the heretofore unseen. AI, as we currently know it, is really good at looking at history and making projections. It is not good at discontinuous developments. That imagination of humankind sets us apart in many ways, and we have to use our ability to see the unseen to help shape and guide how AI will ultimately be applied. That active imagination is the primary tool for maintaining a controlling position in that tug of war between AI and humans.

Dave [9:22]: Before Peter jumps in — I can see him just about to ask a question — I just want to explore this. Automation and AI are two different things solving different kinds of problems. You mentioned your experience with automation, and robotics comes into it as well. What problems is automation exceptional for? What problems is AI exceptional for? And where is there overlap or natural replacement of one with the other?

David [10:07]: Automation has been very powerful in the area of repetitive tasks. If you can train a machine to do something, it can do it more reliably, more rapidly, and with more power than the human alternative it replaced — even the tool-augmented human. That's been a great tool to improve productivity for many of my clients over the years, whether it's handling parts on and off a punch press, sorting apparel in a distribution center, or building boxes for e-commerce. Within a confined set of limits, the results with automation have been very positive.

Where humans have been needed is in the area of ambiguity — being able to handle inputs that fall outside the immediate reach of the automation.

AI, on the other hand, has proven extremely powerful at aggregating and distilling vast amounts of information and generating accurate projections of how the past might extend into the future. I would never recommend following it without a good deal of scrutiny, though. AI draws much of its information from the internet, and the internet contains a wide range of truth. You have to carry that same skepticism about the projections it makes.

The combination of the two is what's becoming truly significant. The attributes of a machine — strength, speed, precision, repeatability — combined with AI systems that can increasingly perceive surroundings, make reasoned decisions, and project optimum outcomes. That's what's expanding the space of collaboration between the world of humans and AI-enabled machines. It's not replacing humans so much as requiring them to direct instead of do. Understanding what machines can and can't do, and what's better done by a human, will be a primary skill for successful people going forward.

Peter [14:24]: Have you seen much of this tension between deterministic processes — where we need the exact same thing done reliably every single time — and applying a non-deterministic system like an AI model, which can potentially have disruptive effects that aren't beneficial when consistency is the goal?

Have you seen ways of helping people understand when not to apply AI? It happens a lot in the digital space — everything must have AI in it. But some of those things are best solved by straightforward automation. We know how to do it, we know how to do it well, and introducing AI complexity into a solved problem is more likely to disrupt than help.

David [15:34]: That's a great point, Peter. We don't have to look too far into the recent past to see the problems you can create when you misapply technology.

One of the great productive organizations in modern retail history is Walmart. Around 2015, they did a hugely productive job with RPA — robotic process automation — unleashing a plethora of bots to improve efficiency. But that became a real impediment during the unprecedented disruption of the COVID years. They had to back off from that automation because the limits of those systems, the tolerances within which they operated, were not wide enough to react to the scale of change COVID brought. Their competitors, because they still had humans in those decision loops, were able to react. Retailers live and die on being in stock without being overstocked, and Walmart found itself in both situations at once because the scale of change was too rapid for the automation. Humans were much more adaptable at working through those circumstances.

Again, AI is great at looking back at history and making accurate, rapid extrapolations. But when something is truly unprecedented, there's nothing in history to provide guidance. Humans handled that far more effectively.

Peter [17:49]: Walmart is an interesting case because as a retail operation they're quite different from most, given their centralized distribution networks. And you do raise an interesting point about applying technology before you have the right fallback plans should it prove to be the wrong fit. I do wonder if we'll see some failures in the market as a consequence of generating far more code than ever before and pushing it into critical systems. It'll be interesting to see what 2026 brings.

David [18:59]: Some early returns on that are already raising some concerning specters. Machine-created code has proven less robust than it first appeared. Again, AI is great at putting a confident face on its projections, but it's not always holding up in the real world.

Dave [19:26]: There's a learning curve, right? Even when implementing automation systems or large language models learning to do something like coding, there's an experience curve you have to go through. And anyone who's shipped product live knows the customers will test it in ways you didn't anticipate. That isn't always pain-free.

David [19:56]: Listening to the customer — the voice of the customer — can get drowned out by the excitement over emerging technologies. Customers' needs still have to dictate the direction. The companies I've worked with that are most successful are the ones where service is the central strength.

That said, the coming change is unprecedented, but not unanticipated. We can see the shapes of what I believe the future will bring. There are certainly difficult valleys to cross from here to there, but I'm very optimistic. One of the things that raises that optimism most is the arrival of AI-enabled robots taking over the drudgery, the dirty, the dangerous work — and freeing humans to do what we do best.

This change is going to hit different industry segments very differently. One of the areas where it will be most impactful is traditional automation. It's going to redefine what a successful application of automation even looks like.

Take spot welding on an automotive production line. Today, it takes enormous precision — all the fixturing, the exact positioning of the vehicle frame so a SCARA robot can reach out in 3D space with millimeter accuracy. In the next generation, humanoid AI-enabled robots will be able to compensate for small deviations using vision systems and still yield that accuracy. The supporting infrastructure and precision fixturing become far less critical. The net cost of that automation drops significantly.

When you want to make a change, your ability to make it is dramatically improved. Rather than running a fixture back through a machining center to make tweaks, it's done in software. And AI-enabled changes may take much of that out of the human loop entirely.

I worked extensively with automotive companies in the 80s and 90s. An engineering change order — a frightening prospect — could take months for a very small change in manufacturing. Recently I worked with a tier-one supplier to a large EV manufacturer, and I was absolutely astounded at the rate of engineering change orders they had to adapt to. What took six months in the 80s was being done over a weekend.

When working with clients today, we can't tell them to shut down and wait for the new technology. What we can do is look at every process decision, every piece of automation being put in place, and ask: is this ready? Is this adaptable to what we now see coming? Is this $400,000 SCARA robot going to be outdated in a year? When you add flexibility and the ability to adjust product mix to the equation, you're now affecting the revenue side of a company's equation, not just the cost side. Addressing revenue expansion is immensely more powerful for helping a company grow than taking a few percentage points out of the cost structure.

Peter [26:17]: Yes — the operational effectiveness that comes from adaptability. The ability to implement change orders rapidly, the visibility into the system. Dave and I have talked about this in previous episodes as well.

David [26:41]: So one of the questions to put to any client making an automation investment is: did they consider only capital equipment suppliers who could provide them with a digital twin? That might be a make-or-break point when evaluating suppliers. Do they use open standards? Do they have an open API — ideally a RESTful API? Adding those criteria to the evaluation, beyond just feeds and speeds, will rapidly enable the ability to adapt as the technology evolves.

Dave [27:57]: As you're describing this shift from difficult-to-change, tightly constrained automation systems toward humanoid robots that can walk around the factory floor and adapt — we started the conversation talking about AI hallucinations. What about the safety side? If we're moving to humanoid robots with AI instruction sets operating near humans, what happens around safety?

David [28:53]: That's probably one of the most critical words in this whole conversation. One of the words I used earlier was "collaborative," which by definition means you've got humans within reach of robots. Robots can move very fast and very forcefully, and that doesn't always coexist well with a human in the work path.

There are several layers to this. At one level there's the ethical and control dimension — the concern some people have about AI systems taking on too much autonomy. And there's the physical safety dimension, which is just as important.

Historically, we've protected humans from robots through fences and physical barriers. That's not going to work in the flexible, open workspaces that will be needed to take full advantage of this next generation of automation. So how do you protect people?

I was recently at a robot manufacturer's facility in Michigan. They're taking their large, powerful robots and encasing them in sensing material that turns the rapidly moving arm into a cobot capability — it senses and can initiate an emergency stop if something unanticipated enters the path. There's also significant work being done with AI-trained camera systems acting as safety arbiters, monitoring the shared workspace for dangerous situations.

Getting from here to the other side — which has to be incredibly safe environments — is going to require hybrid systems in the interim. We don't want to fully trust those vision systems until they've proven themselves across billions of real-world cycles, the way the EV industry has had to do with autonomous driving. That proved far harder and took far longer than projected. So I'd expect some form of hybrid system for the near to intermediate term, until the replacement is proven to be unambiguously safer than physical separation.

Dave [32:20]: So there are many more layers of complexity to handle, because sharing a space with a robot is a very different proposition from being separated from one.

Peter [32:30]: I think on that note, we're near the end of our time today. That's probably a whole other conversation, because my background in resilience and disaster recovery makes me very curious about some of the directions we could go. But since we're at time, let's do our usual wrap-up. We each share one takeaway for the audience. David, would you like to go first?

David [33:04]: What I hope they take away is a bit of caution and a large dose of optimism. Participate in the change. Don't be overrun by it. Stay optimistic, stay ahead of it, and enjoy life.

Peter [33:29]: That's a good message regardless of context. I might just say we should end right there. Dave, your turn.

Dave [33:37]: The story you told about Walmart and what happened when their automation moved outside its tolerance range really struck me. We tend to think of automation as endlessly expandable — able to navigate any number of problem spaces. But there are very clear boundaries within which it works well, and outside those boundaries — particularly in unprecedented circumstances, black swan events — it can actually become fragile. That's giving me a whole set of questions to explore further. So thanks for that thought to take away.

Peter [34:42]: Now I have to come up with something better than "enjoy life." The thing that stayed with me was your point about flexible, programmable robots. When a robot can adapt to its environment and compensate for variance using vision and intelligence, you no longer need the precision fixtures and tightly controlled tolerances that traditional automation requires. The robot adjusts to the circumstances rather than the circumstances being adjusted for the robot. I'd be very interested to watch how that plays out on production lines that currently require high precision, and what it means as the cost of these robots comes down through scaled manufacturing — much like what happened with cars. We're moving toward a world of many flexible, general-purpose machines rather than few highly specialized ones. That's a genuinely interesting space to watch.

On that note, I'll wrap up. Thank you both for a great conversation, and thank you David for all your insights. Until next time.

Dave [36:22]: David, thank you very much. That was a great conversation.

David [36:25]: Thanks, guys. I hope I didn't come across too academic — this is at heart a greasy-fingered farm boy who likes machines and automation.

Peter [36:42]: You've been listening to Definitely Maybe Agile, the podcast where your hosts Peter Maddison and Dave Sharrock focus on the art and science of digital, agile, and DevOps at scale.