How to build Explained, Responsible, Ethical and Fair AI

With the Practicus AI v22.8 release, our AutoML received a much-needed upgrade.

You can now build explained, responsible and ethical AI. We also included new features to create fairness analysis for socially accepted algorithms.

Please check out this recent article from The Guardian to read more about the issues with black-box AI:

‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives

Let’s see step-by-step how we can build AI models with improved transparency using Practicus AI.

Explain AI models to fix errors and build Responsible/Ethical AI

1) Open the Practicus AI app, and start a new cloud node or reuse an existing one.
2) Open the Explore tab, click Cloud Node Files to browse our sample datasets, and choose diamond.csv.
3) Click Model, and select “Price” as the target.
4) Click “Advanced options”, select “Explain”, and click OK to build the model.

5) After the model build is completed, view the experiment details by clicking the MLflow tab and opening the explain folder in the left menu.

You can navigate between model explanation graphics.

We also provide interactive SHAP analysis. You can dynamically change the graphic by updating the drop-down boxes on the X and Y axes.

If you found yourself asking “What is SHAP?”, do not worry, we have you covered. Please stay tuned for an upcoming blog post explaining… model explainability.
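In the meantime, here is the core idea: SHAP attributes a prediction to each input feature using Shapley values, averaging a feature’s marginal contribution over all subsets of the other features. The sketch below is a minimal, exact pure-Python illustration on a toy model, not Practicus AI’s (or the SHAP library’s) actual implementation; the model `f`, the input, and the baseline are all made-up examples:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x, relative to a baseline.

    'Absent' features take their baseline value. Exponential in the
    number of features, so only suitable for tiny illustrations.
    """
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley weight for a coalition of this size
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi += w * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis

# Toy linear model: contributions are easy to check by hand.
f = lambda v: 3 * v[0] + 2 * v[1] + 1
phi = shapley_values(f, x=[2, 1], baseline=[0, 0])
print(phi)  # [6.0, 2.0] -- and 6 + 2 equals f(x) - f(baseline)
```

A useful sanity check: the Shapley values always sum to the difference between the model’s prediction for `x` and its prediction for the baseline, which is why SHAP plots “add up” to the model output.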

Fair AI for socially accepted algorithms

How can we build AI systems that are fair to different groups in society?
Let’s see step-by-step how we can create a fairness analysis for sensitive features, such as (in alphabetical order):
  • Age
  • Disability
  • Gender identity
  • Military status
  • National origin
  • Parenting status
  • Pregnancy
  • Race
  • Religion
  • Sex
  • Sexual orientation
1) Open the Practicus AI app, and start a new cloud node or reuse an existing one.

2) Go to the Explore tab, click Cloud Node Files, and open income.csv. This is a new sample dataset that comes with our cloud nodes. Read more here:

The data was extracted from the 1994 US census database, and the prediction task is to determine whether a person makes over $50K a year.

3) Click Model, and select “Income > 50K” as your target.
4) Click “Advanced options”, then the fairness check button, and select race and sex as “sensitive features”.

Tip: This is a relatively complex model, and creating detailed explanations will take several additional minutes. If you need AI explainability, prefer a larger cloud node (e.g. 64+ cores), or grab a fresh cup of coffee and wait an extra 30 minutes or so.

5) Once the model build is completed, open the model experiment details and then the fairness analysis in the explain folder.

You will see a detailed break-down analysis of all the sensitive features you selected.
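To make the idea behind a fairness break-down concrete, one common metric is the demographic parity difference: the gap between the highest and lowest rates at which the model predicts the positive outcome (here, “Income > 50K”) across groups of a sensitive feature. The sketch below is a simplified pure-Python illustration on made-up predictions, not Practicus AI’s actual fairness implementation; the group labels and predictions are hypothetical:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions per sensitive-feature group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Max minus min selection rate across groups; 0 means parity."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions for two groups of a sensitive feature.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, group))            # {'A': 0.75, 'B': 0.25}
print(demographic_parity_difference(preds, group))  # 0.5
```

A large gap like the 0.5 above is a signal to investigate: the model favors one group’s positive outcome far more often than the other’s, which is exactly the kind of disparity the fairness analysis surfaces per sensitive feature.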

Tip: Not seeing the explain folder in MLflow? Try clicking the parent run in MLflow, or find the correct run ID using the registered models section.


Happy fair and ethical AI days!
