Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions

Feb 17, 2022 • Keith McCormick

About this course

Learn best practices for producing explainable AI and interpretable machine learning solutions.



Exploring the world of explainable AI and interpretable machine learning (1m 2s)
Target audience (1m 18s)
What you should know (1m 3s)
Understanding what your models predict and why (4m 28s)
Variable importance and reason codes (2m 22s)
Comparing IML and XAI (4m 23s)
Trends in AI making the XAI problem more prominent (6m 18s)
Local and global explanations (2m 23s)
XAI for debugging models (2m 26s)
KNIME support for global and local explanations (2m 22s)
Challenges of variable attribution with linear regression (8m 48s)
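
The attribution challenge this lesson covers is easy to reproduce outside the course. Below is a minimal Python sketch, assuming scikit-learn and NumPy (the course itself demonstrates everything in KNIME): two nearly collinear features split one real effect between them, so the individual coefficients swing across subsamples even though their sum stays near the true value of 3.

```python
# Minimal sketch: coefficient instability under near-collinearity.
# Synthetic data; only x1 truly drives y, but x2 is almost identical to x1.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.01, size=500)  # nearly collinear with x1
y = 3 * x1 + rng.normal(scale=0.1, size=500)

# Refit on random subsamples and watch the individual coefficients swing
# while their sum stays near 3 -- neither one is a trustworthy attribution.
for trial in range(3):
    idx = rng.choice(500, size=300, replace=False)
    X = np.column_stack([x1[idx], x2[idx]])
    coefs = LinearRegression().fit(X, y[idx]).coef_
    print(f"trial {trial}: coef = {coefs.round(2)}, sum = {coefs.sum():.2f}")
```
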
Challenges of variable attribution with neural networks (3m 30s)
Rashomon effect (4m 42s)
What qualifies as a black box? (2m 51s)
Why do we have black box models? (4m 20s)
What is the accuracy-interpretability tradeoff? (4m 31s)
The argument against XAI (3m 2s)
Introducing KNIME (4m 8s)
Building models in KNIME (5m 13s)
Understanding looping in KNIME (3m 1s)
Where to find available KNIME support for XAI (2m 53s)
Providing global explanations with partial dependence plots (4m 58s)
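
For readers who want a code analogue of the partial dependence lesson, here is a minimal sketch using scikit-learn's PartialDependenceDisplay; the dataset and model are illustrative assumptions, since the course builds the equivalent in KNIME.

```python
# Partial dependence: sweep one feature over its range while averaging the
# model's predictions over all other features -- a global explanation.
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# One curve per listed feature, showing its average effect on predictions.
PartialDependenceDisplay.from_estimator(model, X, features=["MedInc", "HouseAge"])
plt.show()
```
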
Using surrogate models for global explanations (1m 52s)
Developing and interpreting a surrogate model with KNIME (4m 52s)
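
The surrogate technique from these two lessons, sketched in Python under the assumption of scikit-learn (the course's demo is a KNIME workflow): train an interpretable model on the black box's predictions, then check its fidelity before trusting its explanation.

```python
# Global surrogate: fit a shallow, readable tree to a black box's outputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. A low score
# means the tree's rules are not a faithful explanation.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.1%}")
print(export_text(surrogate))
```
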
Permutation feature importance (1m 11s)
Global feature importance demo (6m 54s)
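
A minimal sketch of the permutation idea behind these two lessons, assuming scikit-learn (the course's demo runs in KNIME): shuffle one feature at a time and record how much the held-out score drops.

```python
# Permutation importance: score drop when a single feature is shuffled.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=5, n_informative=3,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Importance = mean drop in R^2 over n_repeats shuffles of each column,
# measured on held-out data so it reflects genuine predictive value.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```
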
Developing an intuition for Shapley values (4m 35s)
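
The game-theoretic intuition can be computed exactly for a toy game. A sketch in plain Python with a hypothetical three-player payoff function: each player's Shapley value is its marginal contribution averaged over every joining order, and the values always sum to the full coalition's payoff.

```python
# Exact Shapley values for a toy three-player cooperative game.
from itertools import permutations

players = ["A", "B", "C"]
# Hypothetical payoff v(S) for every coalition S.
v = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
    frozenset("ABC"): 90,
}

shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    coalition = frozenset()
    for p in order:
        # Marginal contribution of p when it joins in this order.
        shapley[p] += v[coalition | {p}] - v[coalition]
        coalition = coalition | {p}
shapley = {p: total / len(orders) for p, total in shapley.items()}

print(shapley)  # the three values sum to v(ABC) = 90
```
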
Introducing SHAP (1m 48s)
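
SHAP applies the same Shapley idea to model features. A minimal sketch assuming the shap package and scikit-learn (pip install shap; neither is part of the course listing, which works in KNIME):

```python
# SHAP: per-row, per-feature additive attributions via shap.TreeExplainer.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row gets one attribution per feature (a local explanation); plots
# such as shap.summary_plot aggregate them into a global view.
first = shap_values[0] if isinstance(shap_values, list) else shap_values
print(first.shape)
```
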
Using LIME to provide local explanations for neural networks (2m 23s)
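
A minimal LIME sketch for tabular data, assuming the lime package and an MLP standing in for the lesson's neural network (pip install lime scikit-learn; the course demo is in KNIME): perturb one instance, fit a local linear model, and read its weights as the explanation.

```python
# LIME: a local linear approximation around a single prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = MLPClassifier(max_iter=500, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=[f"f{i}" for i in range(5)], class_names=["0", "1"]
)
# Explain one prediction; weights describe the model only near this point.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(exp.as_list())
```
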
What are counterfactuals? (2m 27s)
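
To make the counterfactual idea concrete, here is a deliberately naive, hypothetical sketch in Python with scikit-learn: search for the smallest single-feature change that flips a prediction. Real tools (for example, the DiCE library) optimize over many features and constraints; this brute-force loop only illustrates the concept.

```python
# Naive counterfactual search: smallest one-feature nudge that flips a class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0]
original = model.predict([x])[0]

best = None  # (feature index, nudge) with the smallest |nudge| that flips
for j in range(X.shape[1]):
    for delta in np.linspace(-3, 3, 121):
        candidate = x.copy()
        candidate[j] += delta
        flipped = model.predict([candidate])[0] != original
        if flipped and (best is None or abs(delta) < abs(best[1])):
            best = (j, delta)

if best is not None:
    print(f"class {original} flips if feature {best[0]} changes by {best[1]:+.2f}")
```
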
KNIME's Local Explanation View node (4m 3s)
Demonstrating KNIME's XAI View node (6m 41s)
General advice for better IML (4m 39s)
Why feature engineering is critical for IML (2m 10s)
CORELS and recent trends (4m 59s)
Continuing to explore XAI (1m 21s)