Testing 1-2-3 | Hosted by Parasoft

Who's Behind the Wheel • AI, Ethics & Accountability in Autonomous Driving

Season 2, Episode 7

As artificial intelligence (AI) continues to transform the automotive industry, one question remains at the center of innovation: Who’s accountable when AI goes wrong? In this thought-provoking episode of Testing 1-2-3, Parasoft’s Arthur Hicken (a.k.a. The Code Curmudgeon) and Chief Marketing Officer Joanna Schloss tackle the ethical, legal, and technical challenges posed by AI in autonomous vehicles.

🚗 What You’ll Learn in This Episode:

  • What is the "trolley problem," and why is it constantly referenced in discussions about AI ethics?
  • Who’s legally responsible when an autonomous vehicle causes an accident: the manufacturer, the developer, or the driver?
  • How Volvo’s bold move to accept liability may reshape consumer trust in self-driving technology.
  • The role of software testing, quality assurance, and transparency in minimizing real-world risks.
  • Surprising responses from popular AI tools like GitHub Copilot when asked about ethical decision-making and liability.

🎙️ Why This Episode Matters:
Whether you're a developer building enterprise applications, a QA engineer testing machine learning models, or a tech leader steering digital transformation, the implications of AI-driven automation go far beyond code. As AI systems increasingly influence safety-critical decisions, understanding your ethical and legal responsibilities as a technologist is essential.

🧠 Real Talk, Real Risks:
Arthur shares firsthand experiences using open-source software to enable semi-autonomous driving in his own vehicle—raising important questions around consent, responsibility, and risk. Joanna draws a compelling parallel to the real-world fallout of the 2024 CrowdStrike outage, in which a faulty endpoint software update disrupted systems worldwide, highlighting how even non-automotive software can impact public safety in unexpected ways.

💡 Key Takeaways:

  • Autonomous driving isn’t just a hardware problem—it’s a software quality problem.
  • Product liability laws are being tested in new ways as AI makes more independent decisions.
  • Software engineers must think beyond functionality to ethics, safety, and accountability.
  • AI may lack principles, but your software design and testing shouldn't.

Don’t miss this essential conversation for anyone working with or around AI.
Hit play, subscribe to Testing 1-2-3, and join us as we break down the intersection of ethics, automation, and accountability—one question at a time. 

🔗 Explore More from Parasoft

Stay connected and dive deeper into the world of automated software testing and AI-driven quality assurance:

Join our community for the latest insights, episodes, and discussions on software testing, AI integration, and quality assurance best practices.