The importance of interpretability in machine learning models is growing as they are increasingly applied in real-world scenarios. Understanding how a model makes decisions benefits not only its users but also those affected by its decisions. Counterfactual explanations were developed to address this need: they show individuals how they could achieve a desirable outcome by perturbing their original data. In the short term, a counterfactual explanation can offer actionable suggestions to someone affected by a machine learning model's decision. For example, a person whose loan application was rejected could learn what they could have done differently to be accepted, and use that insight to improve their next application.
Lucic et al. [1] proposed FOCUS, an algorithm designed to generate optimal-distance counterfactual explanations, i.e., counterfactuals that are as close as possible to the original data, for all instances in tree-based machine learning models.
CFXplorer is a Python package that generates counterfactual explanations for a given model and data using the FOCUS algorithm. This article introduces CFXplorer and showcases how it can be used to generate counterfactual explanations.
GitHub repo: https://github.com/kyosek/CFXplorer
Documentation: https://cfxplorer.readthedocs.io/en/latest/?badge=latest
PyPI: https://pypi.org/project/CFXplorer/
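To give a quick feel for the package before the details, here is a minimal sketch of the workflow. It assumes the Focus class and its generate method; the synthetic data, the model choice, and the [0, 1] feature scaling are illustrative assumptions rather than prescriptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler

from cfxplorer import Focus

# Synthetic binary classification data; features are scaled to
# [0, 1], which (as assumed here) FOCUS expects as input.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X = MinMaxScaler().fit_transform(X)

# Any tree-based scikit-learn classifier should work as the model.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# Create a Focus instance with default hyperparameters and generate
# perturbed versions of the inputs, i.e. counterfactual explanations.
focus = Focus()
X_cf = focus.generate(model, X)
```

Each row of X_cf is a perturbed copy of the corresponding row of X; the examples section below walks through this in more detail.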
- FOCUS algorithm
- CFXplorer examples
- Limitations
- Conclusion
- References
This section briefly introduces the FOCUS algorithm.
The generation of counterfactual explanations is a problem that has been addressed by several existing methods. Wachter, Mittelstadt, and Russell [2] formulated this problem within an optimisation framework; however, this formulation requires gradients of the model's output with respect to its input, so it cannot be applied directly to non-differentiable models such as tree ensembles.
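For reference, their objective can be sketched as follows; the notation here is assumed, with f the trained model, x the original instance, x' the candidate counterfactual, y' the desired outcome, d a distance function, and λ a weight balancing the two terms:

```latex
x'^{*} = \arg\min_{x'} \max_{\lambda} \; \lambda \left( f(x') - y' \right)^{2} + d(x, x')
```

The first term pulls the model's prediction towards the desired outcome, while the second keeps the counterfactual close to the original instance. Minimising this with gradient descent requires f to be differentiable, which is exactly the restriction mentioned above.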