Agile Software Engineering

AI Across the Agile Engineering Lifecycle

Alessandro Guida, Season 1, Episode 26


In this episode of the Agile Software Engineering Deep Dive, Alessandro Guida explores what happens when AI enters the Agile software lifecycle.

AI is rapidly being adopted across discovery, design, development, testing, and operations. But while it can accelerate execution, it also introduces new risks - from shallow understanding to over-reliance on generated solutions.

This episode examines how AI interacts with core Agile principles.

Where it strengthens engineering practices. Where it can quietly weaken them. And why real value comes from augmentation - not replacement.

Because Agile is not just about moving fast. It is about learning and making better decisions over time.

And that is exactly what we need to protect.

If you are working with AI in your teams - or considering how to introduce it - this episode offers a grounded perspective on how to do it thoughtfully.

Please subscribe to the podcast if you find it useful.

And if you want to go deeper, you can also read the full article in the Agile Software Engineering newsletter.

SPEAKER_01

Welcome to the Agile Software Engineering Deep Dive, the podcast where we unpack the ideas shaping modern software engineering. My name is Alessandro Guida, and I've spent most of my career building and leading software engineering teams across several industries. Today I want to talk about something that we rarely discuss explicitly in our field, even though it quietly influences almost every decision we make: ethics.

In most engineering disciplines, ethics is not something optional or abstract. It is part of the profession itself. It defines boundaries, it guides decisions, it helps engineers navigate situations where technical correctness alone is not enough. In software engineering, however, ethics often remains in the background. We tend to focus on what we can build, how fast we can build it, and how well it performs. And in doing so, we sometimes overlook a more fundamental question: should we build it at all?

This question becomes even more relevant as software continues to expand its influence across every part of society. From financial systems to healthcare, from transportation to the way people consume information, software is no longer just a tool. It is an active force shaping behavior and outcomes. And with the rise of AI, we are now building systems whose behavior we cannot always fully predict, explain, or verify before they are released into the world. Systems that learn, adapt, and sometimes act in ways that go beyond what we originally intended.

In that context, the role of the engineer changes. We are no longer only solving problems. We are making decisions with consequences that extend far beyond the immediate scope of the system. In this episode, I want to explore what ethics means in software engineering, why it has become more relevant than ever, and how frameworks like the ACM/IEEE Code of Ethics can help us navigate decisions that are not purely technical.
Because in the end, engineering is not only about building systems that work, it is about taking responsibility for what those systems do in the real world. Let's dive in.

SPEAKER_02

Imagine for a second that you are walking into a massive, newly constructed skyscraper.

SPEAKER_00

Okay, yeah.

SPEAKER_02

Or, I mean, picture yourself driving your car across a giant suspension bridge over a bay. When you do that, you don't really break a sweat, right? You aren't worrying about whether the steel girders are gonna snap.

SPEAKER_00

Right. Or if the concrete is just gonna spontaneously crumble beneath you.

SPEAKER_02

Exactly. You just implicitly trust the structure. I mean, you know that the civil engineers and the architects who built it were bound by these incredibly strict, non-negotiable ethical and safety codes. You just trust the process.

SPEAKER_00

Yeah. We take that completely for granted.

SPEAKER_02

We really do. Yeah. But think about what happens when you download a new app on your phone, or you know, when you log into a cloud-based system that manages your life savings, or even a digital medical portal. Do you ever stop and wonder about the ethical code of the person who actually wrote that software?

SPEAKER_00

Almost never. I mean, we rarely do. We take physical safety codes as an absolute given, but digital safety codes like the ethics of how our software is built often feel like this completely unregulated wild frontier.

SPEAKER_02

Oh, absolutely.

SPEAKER_00

There is just this massive disconnect between how we view physical infrastructure and digital infrastructure.

SPEAKER_02

Well, that disconnect is exactly what we are unpacking today. Welcome to today's deep dive. We are drawing from issue number 30 of the Agile Software Engineering newsletter, which dives deep into the ethics of software engineering.

SPEAKER_00

It's a really fascinating issue.

SPEAKER_02

It is. Our mission today is to look at a framework that developers are using to navigate this increasingly unpredictable AI-driven world without, you know, sacrificing our collective well-being. Because it is wild to me that downloading software right now feels kind of like going to a dealership to buy a car and naturally expecting the brakes to work. Right. But in the software world historically, it's as if the engineers were only ever asked by their boss, uh, hey, can you build a car that goes 300 miles per hour?

SPEAKER_00

Yeah, just focusing on the speed.

SPEAKER_02

Exactly. Nobody ever paused the assembly line to ask, wait, should we build a car that goes 300 miles per hour if we haven't figured out how to install the brakes yet?

SPEAKER_00

That is the perfect analogy. Because the focus in tech has overwhelmingly been on execution and speed. You know, can it be built and how fast can we ship it? Those questions have completely overshadowed the moral weight of the product, which asks, should it be built at all?

SPEAKER_02

Right.

SPEAKER_00

In fields like medicine or civil architecture, ethics isn't an afterthought. It's literally day one of your education. First, do no harm. You don't build a structurally unsound bridge just because the client is in a hurry.

SPEAKER_02

Yeah, that wouldn't fly.

SPEAKER_00

Not at all. But applying that same rigid ethical lens to software engineering is, well, it's a relatively new muscle for the industry to flex.

SPEAKER_02

Let's start by looking at where this ethical gap actually comes from. Because the newsletter traces it back to the very word itself, you know, software.

SPEAKER_00

Yeah, the terminology matters a lot.

SPEAKER_02

It really does. Let's unpack this. The word software sounds, well, it sounds soft. It sounds intangible, like it's harmless.

SPEAKER_00

Right. It's not heavy machinery.

SPEAKER_02

Exactly. It's not gears spinning at thousands of RPMs. It's just pixels on a screen and, like, lines of text in a code editor.

SPEAKER_00

The word itself creates this illusion of safety. It feels entirely abstract. I mean, if a developer writes a terrible flawed line of code, the computer sitting on their desk doesn't explode and take out the office.

SPEAKER_02

Right, there's no shrapnel.

SPEAKER_00

Exactly. They just get a red error message on their screen. But the stark reality we are living in now is that this soft, abstract stuff constitutes the hard infrastructure of our modern world.

SPEAKER_02

It really is everything.

SPEAKER_00

It controls our global financial networks, it runs life support medical devices, it manages our power grids, it dictates human behavior and curates the information we consume every single day.

SPEAKER_02

Okay, here is where my brain gets stuck on the civil engineering comparison. Because a civil engineer would absolutely refuse to build a structurally unsafe bridge, regardless of who requested it, right? Or how much money was offered.

SPEAKER_00

Oh, absolutely. They'd lose their license.

SPEAKER_02

Right, because the consequences are visible. They are immediate and irreversible. So is the lack of an ethical pause in software just because bad code doesn't leave physical rubble behind? I mean, if a bridge collapses, it's on the evening news. Everyone sees the wreckage.

SPEAKER_00

Yes.

SPEAKER_02

But if an algorithm fails, does it just look like a harmless glitch to the outside world?

SPEAKER_00

That's exactly it. Because there is no smoking crater, the invisibility of the failure breeds a very specific kind of ethical complacency. The lack of physical rubble is exactly what makes bad software so incredibly dangerous.

SPEAKER_02

Wait, really? Because it's invisible.

SPEAKER_00

Yeah. The consequences of software failures are massive, but they're often completely invisible until it is entirely too late. Think about it. When a bridge fails, the disaster is localized to the people on that bridge.

SPEAKER_02

Right, the immediate vicinity.

SPEAKER_00

But when a piece of software fails, or worse, when it operates exactly as intended, but with catastrophic societal side effects, the impact is global. It scales instantly to millions of people.

SPEAKER_02

Oh wow. And I guess without that smoke and crater, there's just this huge vacuum of responsibility.

SPEAKER_00

Exactly.

SPEAKER_02

I can easily picture a scenario where a developer builds an algorithm that unfairly denies loans to a specific demographic. And when it gets caught, the developer just blames the historical data they were given.

SPEAKER_00

Yeah. It was the data's fault.

SPEAKER_02

Right. And then the managers blame the algorithm as if it's this independent entity making its own choices. Nobody actually takes the blame because there's no single broken beam to point to.

SPEAKER_00

That distancing effect happens constantly in tech. Companies can hide behind the complexity of the code and they just claim it was an unforeseen bug or, like, unexpected user behavior.

SPEAKER_02

It's a convenient excuse.

SPEAKER_00

It really is. It allows individuals to completely detach their daily keyboard strokes from the real-world harm those strokes actually cause.

SPEAKER_02

So we've been talking about glitches and oversights like they are accidents, right? But what happens when the destruction isn't a bug, but the actual business model?

SPEAKER_00

Ah, yeah. That's where it gets really heavy.

SPEAKER_02

Because it is one thing to talk about an accidental oversight in a loan algorithm. It's entirely another thing to talk about systems that cause harm by design.

SPEAKER_00

And that shift from accidental harm to intentional optimization is really the defining ethical crisis of modern tech.

SPEAKER_02

The newsletter points directly at social media algorithms here, because their explicit purpose, like the primary reason the code was architected and deployed in the first place, is to maximize engagement.

SPEAKER_00

Right, to keep you hooked.

SPEAKER_02

Exactly. It's to keep you, and especially young people, trapped in these endless scrolling loops. And that is not a side effect of this system. It's not a glitch.

SPEAKER_00

No, it's the goal.

SPEAKER_02

Right. It is a deliberate, mathematically optimized design choice.

SPEAKER_00

Think of it this way: a traditional civil engineer builds a road to get you efficiently from point A to point B.

SPEAKER_02

Makes sense.

SPEAKER_00

But a social media engineer is often tasked with building a road that is deliberately designed to keep you driving in circles forever, ensuring you never actually reach your destination.

SPEAKER_02

Oh wow.

SPEAKER_00

Because as long as you're trapped in that roundabout, they can keep showing you billboards.

SPEAKER_02

That is a dark, but man, an incredibly accurate way to put it.

SPEAKER_00

It's the optimization of profit at the direct and measurable expense of human well-being, attention spans, and mental health. And for a very long time, the tech industry just called that good business.

SPEAKER_02

Or like growth hacking.

SPEAKER_00

Exactly, growth hacking. But the ethical reality of what they're actually building is finally starting to catch up to them.

SPEAKER_02

And just when you think you have a handle on the ethics of these engagement algorithms, we have to throw the ultimate multiplier into the mix, which is AI.

SPEAKER_00

Oh, yeah. That changes everything.

SPEAKER_02

Because artificial intelligence makes everything we just talked about infinitely more complicated.

SPEAKER_00

With traditional software, I mean, even if it was designed maliciously to keep you scrolling, a human being still wrote that explicit logic. We could audit the code and know exactly how it works.

SPEAKER_02

We could see the roundabout they built.

SPEAKER_00

Right. But with modern generative AI and machine learning, we are building systems whose emergent behaviors we cannot fully predict, explain, or verify in advance.

SPEAKER_02

They learn and adapt outside of their explicit programming. And here's where it gets really interesting for me. Um, if an AI starts acting in ways it wasn't programmed to do, who takes the fall?

SPEAKER_00

It's a huge legal and ethical gray area.

SPEAKER_02

It feels to me like this: imagine an engineer building a complex maze for a lab mouse, right? But then halfway through the experiment, the mouse suddenly learns how to fly.

SPEAKER_00

Right. It just totally breaks the rules of the maze.

SPEAKER_02

Yeah. It breaks through the ceiling and escapes the building.

SPEAKER_00

Yeah.

SPEAKER_02

So does the engineer get a pass on the blame just because the technology's evolution was completely unpredictable?

SPEAKER_00

That is the core debate. Because if you are developing a system that you cannot unequivocally prove to be safe, the ethical question isn't how to fix the bug later.

SPEAKER_02

What is it then?

SPEAKER_00

It's whether you should release it to the public at all. AI also introduces the dual-use dilemma.

SPEAKER_02

Unpack that for me. What's the dual-use dilemma?

SPEAKER_00

Well, you might train an AI to find vulnerabilities in code so developers can patch them and make systems safer. That's a good thing.

SPEAKER_01

Right.

SPEAKER_00

But that exact same AI can be used by bad actors to discover those same vulnerabilities and exploit them for massive cyber attacks.

SPEAKER_02

Oh man.

SPEAKER_00

So where does the original creator's responsibility begin and end?

SPEAKER_02

Right. Where is the line? Because you can't just uninvent the flying mouse once it's out there.

SPEAKER_00

Exactly. And the newsletter argues that technical skill is no longer sufficient to solve these issues. You simply cannot code your way out of a moral dilemma.

SPEAKER_02

That makes a lot of sense.

SPEAKER_00

Navigating unpredictable, highly complex tech requires human judgment. We have to stop treating ethics as a rigid list of compliance rules, like um like a checkbox you tick for HR and start treating it as a dynamic navigational tool for daily decision making.

SPEAKER_02

Aaron Powell A navigational tool. I like that. So if technical skill alone can't fix an unpredictable AI or an addictive algorithm, we need a map. We need a formalized framework to actually guide that judgment.

SPEAKER_00

And that is exactly what the source material provides here.

SPEAKER_02

Right. It introduces the ACM and IEEE Software Engineering Code of Ethics. And just for anyone listening who isn't deep in the tech world, ACM and IEEE are essentially the world's largest and most prestigious associations of technical professionals.

SPEAKER_00

They are the heavyweights.

SPEAKER_02

Yeah. So this isn't just some fringe manifesto written on a blog somewhere. This is the gold standard for the profession.

SPEAKER_00

And what is really crucial about this specific code of ethics is that it is entirely tech-agnostic.

SPEAKER_02

Meaning it applies to everything.

SPEAKER_00

Exactly. It was designed to be resilient over time. It doesn't matter if you are building a simple mobile app, writing firmware for a pacemaker, or training the next generation of generative AI. The underlying principles of human impact remain exactly the same.

SPEAKER_02

Looking at the framework, it seems to pivot on one core theme, which dictates how developers should manage the external impact of their work. And that theme is putting society at the center.

SPEAKER_00

Yes, the public interest.

SPEAKER_02

It says the user or the client is not the center of the universe. The public interest is.

SPEAKER_00

It forces the engineer to look at scale because an engagement feature might work perfectly and seem totally harmless when you test it with one user in a lab.

SPEAKER_02

Sure, it's just one person clicking the button.

SPEAKER_00

But the framework demands that you ask: what happens when one billion people use it simultaneously?

SPEAKER_02

That changes the math entirely.

SPEAKER_00

It really does. Does it cause societal addiction? Does this recommendation algorithm inadvertently amplify political misinformation and erode democratic processes at scale?

SPEAKER_02

You have to step back and look at the forest, not just the tree you are coding, which leads to a massive tension point in the document.

SPEAKER_00

Employer tension.

SPEAKER_02

Yes. The framework says engineers must act in the best interests of their client and their employer, but only when it is consistent with the public interest.

SPEAKER_00

That's a huge caveat.

SPEAKER_02

That caveat is doing a lot of heavy lifting. I mean, it means balancing loyalty to the person signing your paycheck with public safety.

SPEAKER_00

And the source explicitly states that blind execution is an abdication of responsibility. "I was just following orders" is not an acceptable defense in software engineering. If your boss asks you to build something harmful, you have a professional ethical duty to refuse.

SPEAKER_02

Okay, let me push back on this, or at least play devil's advocate for a second, because I am thinking about how this actually plays out in the real corporate world. Let's use a non-tech analogy. If a restaurant owner tells a head chef, hey, use this expired meat for the stew today to save us some money on food costs, the chef refuses. Why? Because serving rotten meat physically poisons people, and the chef knows they could be held personally liable or lose their license. It's a very clear line.

SPEAKER_00

Very clear.

SPEAKER_02

But in tech, it seems infinitely harder for, like, a mid-level coder sitting in a cubicle to tell the vice president of product, no, I will not build this addictive notification feature because it violates the broad public interest. I mean, how does a developer actually push back without just getting fired on the spot?

SPEAKER_00

That is the reality for a lot of people. But the framework provides the exact mechanism for that pushback. The code of ethics acts as a professional shield.

SPEAKER_02

How so?

SPEAKER_00

Without it, a junior developer refusing to build an addictive feature just sounds like they're being insubordinate or just overly opinionated.

SPEAKER_02

Right, like they're just being difficult.

SPEAKER_00

But with the framework, that developer can point to a globally recognized standard. They can actually document their objection in the project's tracking system.

SPEAKER_02

Like in JIRA or something.

SPEAKER_00

Exactly. In JIRA, or the sprint planning notes, they can officially state: "Implementing this data-scraping tool violates our profession's ethical standard for user privacy."

SPEAKER_02

Oh, I see. It moves the refusal from a personal feeling to a professional compliance issue.

SPEAKER_00

Yes.

SPEAKER_02

It gives them a vocabulary to fight back against bad management, placing the burden of the ethical breach squarely on the leadership if they force it through anyway.

SPEAKER_00

It empowers the engineer to prioritize long-term societal trust over the company's short-term quarterly gains.

SPEAKER_02

That makes a lot of sense. And that connects directly to how the framework views the software product itself, because the newsletter highlights that developers must ensure their products meet the highest professional standards possible.

SPEAKER_00

Quality is huge here.

SPEAKER_02

Yeah, it makes a really bold claim: quality is not just a technical attribute, it is an ethical attribute.

SPEAKER_00

"Good enough" is an incredibly dangerous phrase in software development.

SPEAKER_02

I can imagine.

SPEAKER_00

If a team releases a digital medical portal or a financial app with good enough security just to meet an aggressive launch deadline, and people's private data is subsequently stolen, that is not a technical oversight.

SPEAKER_02

It's an ethical one.

SPEAKER_00

It is a profound ethical failure. The team knew the standard for safety wasn't met, and they shipped the product to the public anyway.

SPEAKER_02

Okay, so focusing on the public, managing the employer tension, and demanding high product quality, that all addresses the external impact. Like the stuff the engineer pushes out into the world.

SPEAKER_00

Right, the product.

SPEAKER_02

But the deep dive then turns completely inward. It looks at the internal ecosystem, the people, the processes, and the culture of the developers themselves. Because, well, you obviously can't build ethical products in an inherently unethical environment.

SPEAKER_00

Exactly. The culture dictates the code. The framework places a heavy emphasis on maintaining integrity and independence in professional judgment.

SPEAKER_02

What does that look like in a team setting?

SPEAKER_00

It requires deep intellectual honesty. It means admitting your own limitations, especially when dealing with complex black box systems like AI, and having the fortitude to resist the loudest voice in the room.

SPEAKER_02

Ah, the loudest voice. In a high-stakes tech meeting, the loudest voice is usually the person pushing for the fastest release schedule, right? Or the highest profit margin.

SPEAKER_00

Almost always.

SPEAKER_02

So independence of judgment means having the courage to be the quiet, insistent voice saying, We actually don't understand how this AI model is weighting its decisions yet. We need to slow down before deployment.

SPEAKER_00

But an engineer can only be that courageous voice if management allows it. Organizations that reward speed over quality or compliance over critical thinking, they are breeding grounds for catastrophic outcomes.

SPEAKER_02

It starts at the top.

SPEAKER_00

It does. If a manager penalizes or demotes an engineer for raising a valid safety concern, that manager is ethically responsible for the eventual failure of that product. You cannot ask an individual engineer to be ethical in a vacuum. The corporate organization has to support it structurally.

SPEAKER_02

So if the culture punishes slowing down, the developer is essentially forced to be unethical just to survive in their job.

SPEAKER_00

Exactly. Management sets the ethical ceiling for the entire project.

SPEAKER_02

And that ties right into how developers treat each other, right? The framework discusses collective responsibility, sharing credit, avoiding blame shifting, supporting colleagues. How does team trust actually translate into safer software architecture?

SPEAKER_00

Think about what happens when developers are trapped in a hyper-competitive, cutthroat environment.

SPEAKER_02

They probably cover their tracks.

SPEAKER_00

They start hiding their mistakes. They don't ask for peer code reviews because they don't want to look weak. They silo their knowledge so they become indispensable.

SPEAKER_02

Oh, that sounds toxic.

SPEAKER_00

It is. If a lead engineer intentionally hides how a critical payment gateway works just to protect their job security, and then they leave the company, the rest of the team is flying blind.

SPEAKER_02

Which is dangerous.

SPEAKER_00

Very, because when that system eventually breaks, it is a catastrophic failure for the user. Team dynamics and psychological safety aren't just HR buzzwords, they are structural requirements for building resilient ethical software.

SPEAKER_02

Which brings us to perhaps the most surprising aha moment in the entire source material for me. The framework states that developers must participate in lifelong learning and constantly uphold their competence.

SPEAKER_00

Yes, principle eight.

SPEAKER_02

Wait, let me make sure I have this right. We usually think of ethics as a strict good versus evil paradigm. You know, don't steal user data, don't lie to the client. The obvious rules. But this asserts that simply failing to update your skills, failing to learn, is an act of ethical breach. That is a massive perspective shift for the listener.

SPEAKER_00

It really is. It reframes how we view professional stagnation completely. Ignorance and outdated practices cause just as much devastating harm as malicious intent.

SPEAKER_02

So incompetence is an ethical failure. Let's ground this. What does that actually look like in practice?

SPEAKER_00

Imagine a developer is tasked with encrypting a database of user passwords. Instead of reading up on modern cryptographic standards, they use an open-source security library that they learned, say, five years ago. But they don't realize it has since been deprecated and is full of known vulnerabilities. They skip reading the patch notes because they are busy or just complacent.

SPEAKER_02

And then what happens?

SPEAKER_00

A year later, a hacker exploits that exact vulnerability, resulting in a credential stuffing attack that ruins thousands of lives.

SPEAKER_02

But the developer didn't set out to hurt anyone. I mean, they didn't intend to leak the passwords.

SPEAKER_00

Intent is irrelevant to the outcome. Think about medicine. If a surgeon uses a 20-year-old surgical technique that the broader medical community has proven to be highly dangerous simply because the surgeon didn't bother to read the latest medical journals.

SPEAKER_02

We call that malpractice.

SPEAKER_00

Exactly. We call it malpractice. Software engineering is reaching that exact same critical threshold. When you are building systems that affect human lives, human finances, and human health, you do not get to say, oops, I didn't know how the new security protocol works. Good intentions absolutely do not prevent bad code. If you are not actively maintaining your technical competence, you are acting unethically because you are putting the public at severe risk.
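To make the password example above concrete, here is a minimal sketch, not taken from the episode, of the gap between an outdated habit and current practice, using only Python's standard library. The function names (`legacy_hash`, `hash_password`, `verify_password`) and the scrypt cost parameters are illustrative choices, not a prescription; real systems should follow current published guidance and tune parameters to their hardware.

```python
import hashlib
import hmac
import os

# Outdated practice: a fast, unsalted hash. Identical passwords produce
# identical digests, and commodity GPUs can test billions of guesses per
# second against a leaked database.
def legacy_hash(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Current practice: a salted, memory-hard key-derivation function.
# scrypt has been in Python's standard library since 3.6.
def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user, stored alongside the digest
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

if __name__ == "__main__":
    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                   # False
```

The point of the sketch is the episode's argument in miniature: both versions "work" in the sense that logins succeed, so the difference is invisible to the user and to a manager watching the release date. It only becomes visible after a breach, which is exactly why staying current is framed as an ethical obligation rather than a nice-to-have.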

SPEAKER_02

That is incredibly heavy. But it is also deeply validating. It completely elevates the role of the software engineer from just a coder typing on a keyboard to a true professional, wielding immense societal responsibility.

SPEAKER_00

It really does.

SPEAKER_02

It summarizes the entire journey we've been on today perfectly. Going from that initial illusion of soft, harmless code to the very hard reality that every single keystroke holds moral weight.

SPEAKER_00

Ethics isn't some extra layer of documentation. It's not a boring seminar you sit through once a year to satisfy a corporate mandate.

SPEAKER_02

Right, it's core to the job.

SPEAKER_00

Engineers are not just problem solvers, they are decision makers whose choices echo across society.

SPEAKER_02

Every line of code, every architectural choice, and every single feature that gets pushed to your phone reflects a set of values that are deeply embedded in the build.

SPEAKER_00

And as AI becomes more autonomous and integrated into our daily lives, the space for unintended consequences is only going to expand exponentially. A robust ethical framework is the only navigational tool we have left to manage that uncertainty.

SPEAKER_02

So, what does this all mean for you, the listener? It means that the next time you open an app on your phone or swipe on a recommended video or log into a portal, you need to remember that human values, or sometimes the terrifying lack of them, were baked into every single tap and swipe.

SPEAKER_00

Absolutely.

SPEAKER_02

Software is quite literally a mirror of the people who built it and the corporate cultures that funded it.

SPEAKER_00

And understanding that reflection is going to become even more vital as we move forward into more complex technological eras.

SPEAKER_02

Which leaves me with a final, slightly provocative thought for you to mull over. We've talked extensively about how unpredictable AI is and how these systems learn directly from the data we feed them. Well, if advanced AI systems are currently training on the data, the decisions, and the code we are producing today, are the minor ethical compromises we ignore right now going to become the fundamental, unquestionable moral laws of tomorrow's autonomous machines?

SPEAKER_00

Oof. That is a heavy thought.

SPEAKER_02

It is definitely something to think about the next time you blindly click accept terms and conditions.

SPEAKER_00

A very sobering thought to leave on. The foundation we lay today dictates the automated decisions of tomorrow.

SPEAKER_02

It really makes you look at that 300 mile per hour digital car a little differently. Suddenly, you really want to know who is in charge of designing the brakes. Thank you so much for joining us on this deep dive.

SPEAKER_00

It's been a pleasure exploring this with you. Take care.

SPEAKER_01

If you found this episode valuable, please share it with a colleague, your team, or your network. You can access all episodes by subscribing to the podcast and find their written counterparts in the Agile Software Engineering newsletter on LinkedIn. And if you have thoughts, ideas, or stories from your own engineering journey, I'd love to hear from you. Your input helps shape what we explore next. Thanks again for tuning in. And see you in the next episode.