Research and development - Seminars
Continued improvements in the predictive accuracy of machine learning models have enabled their widespread practical application, but this has made the need for interpretability increasingly pressing. This presentation introduces the motivations, methods, and challenges of interpretable machine learning. We review some of the most widely adopted methods, such as LIME, SHAP, and counterfactual explanations, and discuss interactive visual analytics tools that can also be used to increase interpretability, such as ViCE (Visual Counterfactual Explanations). We also discuss some of the open challenges in the area, particularly recent adversarial techniques that can mislead perturbation-based explanation methods.
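As a rough illustration of the attribution-style explanations mentioned above, the sketch below applies the shap library to a tree-based classifier on synthetic data. The dataset, model, and choice of explainer are illustrative assumptions, not material from the presentation itself.

```python
# A minimal sketch of a SHAP explanation for a tree-based model.
# Assumes the shap and scikit-learn packages are installed; the
# synthetic dataset stands in for any tabular classification task.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic tabular data (hypothetical stand-in for a real dataset).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])

# Each row attributes one instance's prediction to the input features;
# depending on the shap version, the result is a list (one array per
# class) or a single multi-dimensional array.
print(shap_values[0].shape if isinstance(shap_values, list) else shap_values.shape)
```

Methods like LIME follow a similar pattern: perturb the inputs around an instance, observe how the model's predictions change, and fit a simple local surrogate whose weights serve as the explanation. It is precisely this reliance on perturbations that the adversarial techniques discussed in the talk exploit.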
YouTube – Quantil Matemáticas Aplicadas
1. Presentation