A brief overview of the XAI landscape

XAI Landscape

The field of explainable AI is thriving with interesting solutions that show the potential to address almost any task in any given setting. This surge of methods and models comes in response to interpretability being identified as one of the key factors for AI solutions to be trusted and widely deployed.

Depending on the complexity and context of the problem at hand, the most suitable approach can be selected from the XAI landscape. Machine learning models with a simple structure, such as decision trees or linear models, can generally handle problems of low complexity and relatively small dimensionality well; in that case, the model's behavior can be easily understood and visualized. Otherwise, more complex models must be deployed, and the need for explanations is addressed either by post-hoc analysis or by the inclusion of a reasoning component.
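As a minimal sketch of the intrinsically interpretable case, the snippet below trains a shallow decision tree and prints its full decision logic as human-readable rules. The library (scikit-learn) and dataset (Iris) are illustrative choices, not prescribed by the text.

```python
# Minimal sketch: a simple, intrinsically interpretable model whose decision
# logic can be read directly. scikit-learn and the Iris dataset are
# illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A shallow tree keeps the structure small enough to inspect by eye.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The entire model prints as a handful of human-readable if/then rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```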

The coverage of post-hoc explanations can be either local, when a particular prediction is explained, or global, when the purpose is to understand the model's overall behavior. At this level, popular explainability techniques are mostly based on model simplification, feature attributions, visualizations, and examples. Counterfactual explanations are particularly appealing among example-based techniques: they provide the minimum change to a particular input that would flip the output. Model-agnostic techniques apply to any kind of model, although in many cases model-specific implementations significantly speed up computations, especially for highly complex models (see, e.g., the SHAP-specific implementations DeepSHAP, TreeSHAP, and KernelSHAP). Model-specific techniques, on the other hand, can often provide more meaningful explanations driven by the model's internal structure. For instance, activation mapping for convolutional neural networks on computer vision tasks explains the model's behavior at a higher level, from the identification of small patterns to the recognition of more complex features.
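To make the feature-attribution idea concrete, here is a minimal sketch using the open-source `shap` package with its model-specific TreeExplainer (TreeSHAP). The regression dataset and random-forest model are illustrative assumptions; the same attributions, read per instance, give local explanations, while averaging their magnitudes over many instances gives a global view.

```python
# Minimal sketch of post-hoc feature attributions with TreeSHAP, assuming the
# open-source `shap` package; dataset and model are illustrative choices.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
X, y = data.data, data.target

# Complex model whose predictions we want to explain after the fact.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer is the fast, model-specific implementation for tree ensembles;
# shap.KernelExplainer would be the model-agnostic (but slower) alternative.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:200])  # shape: (n_samples, n_features)

# Local explanation: per-feature attributions for a single prediction.
print(dict(zip(data.feature_names, np.round(shap_values[0], 2))))

# Global view: mean |attribution| ranks features by overall importance.
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, global_importance),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```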

Hybrid methods assign model optimization and the provision of explanations to different components that are nevertheless coupled and run in parallel: a complex model is responsible for optimizing performance, while an interpretable method optimizes the quality of explanations. The entire spectrum of XAI solutions is supported by open-source software and tools that can be freely accessed and experimented with.
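As a very loose illustration of the two-coupled-components idea, the sketch below pairs a black-box model optimized for accuracy with a shallow tree fitted to mimic its outputs and serve as the explanation component. A distilled surrogate like this is only a crude stand-in for a genuinely co-trained hybrid, and every name here (models, dataset, fidelity check) is an illustrative assumption.

```python
# Loose sketch: performance and explanation handled by separate, coupled
# components. The surrogate is a simplification of true hybrid methods.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Performance component: optimized purely for predictive accuracy.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Explanation component: a shallow tree fitted to the black box's outputs,
# trading some fidelity for rules a person can actually read.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully the interpretable component tracks the complex one.
print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```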