
Ethics Untangled
Ethics Untangled is a series of conversations about the ethical issues that affect all of us, with academics who have spent some time thinking about them. It is brought to you by the IDEA Centre, a specialist unit for teaching, research, training and consultancy in Applied Ethics at the University of Leeds.
Find out more about IDEA, including our Masters programmes in Healthcare Ethics and Applied and Professional Ethics, our PhDs and our consultancy services, here:
ahc.leeds.ac.uk/ethics
Ethics Untangled is edited by Mark Smith at Leeds Media Services.
Music is by Kate Wood.
43. How Do You Assure AI For Bias and Accessibility in the NHS? With Adam Byfield
Adam Byfield is a Principal Technical Assurance Specialist at NHS England. After his previous appearance on the podcast, in which he discussed ethical assurance for AI applications in healthcare, we were keen to get him back to dive into some more specific issues. We chose bias and accessibility, two related issues that are clearly central for anyone concerned with AI, including in healthcare applications. We talked about different forms of bias, how bias can affect accessibility, and what forms of bias, if any, might be acceptable.
Ethics Untangled is produced by IDEA, The Ethics Centre at the University of Leeds.
Bluesky: @ethicsuntangled.bsky.social
Facebook: https://www.facebook.com/ideacetl
LinkedIn: https://www.linkedin.com/company/idea-ethics-centre/