Infinite Machine Learning: Artificial Intelligence | Startups | Technology

Serg Masis on interpretable machine learning, process fairness vs statistical fairness, how to measure interpretability, how to interpret neural networks, how to increase the interpretability of a model

May 29, 2022 Prateek Joshi
Show Notes

Serg Masis is a Data Scientist in agriculture with a background in entrepreneurship and web/app development. He's the author of the book "Interpretable Machine Learning with Python". In addition to ML interpretability, he's passionate about explainable AI, behavioral economics, and ethical AI.

In this episode, we cover a range of topics including:
- How did he get into machine learning?
- What is interpretable ML?
- What is post hoc interpretability? (a rough illustration follows the list)
- Process fairness vs statistical fairness
- How does an algorithm create a model?
- How does a model make predictions?
- What makes an ML model interpretable?
- How do you measure the interpretability of a model?
- How do parts of the model affect predictions?
- Does the method of interpretation depend on the model? Or can we apply a given method to a number of models?
- Can you explain a specific prediction from a model?
- What techniques can we use to interpret neural networks?
- What techniques are available to increase the interpretability of a model?
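To make the post hoc interpretability topic concrete, here is a minimal sketch of one model-agnostic post hoc technique: permutation importance, applied to a trained classifier with scikit-learn. The dataset, model, and parameter choices are illustrative assumptions for this example, not something specified in the episode.

```python
# A minimal sketch of post hoc, model-agnostic interpretation using
# permutation importance from scikit-learn. The dataset, model, and
# hyperparameters are illustrative assumptions, not from the episode.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the model first, then interpret it after the fact (post hoc),
# without relying on the model's internal structure.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out accuracy;
# larger drops indicate features the model depends on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Because the method only needs predictions and held-out data, the same sketch works for any fitted model, which is what makes it post hoc rather than tied to an inherently interpretable model class.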