The XMANAI project is working to provide the tools to navigate Artificial Intelligence (AI)’s “transparency paradox” by designing, developing and deploying a novel Explainable AI Platform powered by explainable AI models that inspire trust, augment human cognition and solve concrete manufacturing problems with value-based explanations.

XMANAI is one of the few European projects focusing on eXplainable Artificial Intelligence (XAI) methodology. However, XMANAI does not rely on XAI alone to analyse data more deeply and efficiently; it also introduces several other novel methodologies to provide better data insights. Have you ever heard of Graph Neural Networks?

The field of explainable AI is thriving with interesting solutions, showing the potential to address almost any task in any given setting. This surge of methods and models comes in response to interpretability being identified as one of the key factors for AI solutions to be trusted and widely deployed.

The key goal of the XMANAI project is Explainable Artificial Intelligence: an approach that aims to open up the black-box decisions of AI systems, inspecting and attempting to understand the steps and models involved in decision making in order to increase human trust.
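To make the idea concrete, here is a minimal sketch of one common model-agnostic explanation technique, permutation importance: a black-box model is trained, then each input feature is shuffled in turn to measure how much the model's accuracy depends on it. The dataset and model below are synthetic stand-ins chosen for illustration, not part of the XMANAI platform itself.

```python
# Sketch: explaining a black-box model with permutation importance.
# The synthetic dataset stands in for real manufacturing data (assumption).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Build a synthetic classification problem with 5 features, 3 of them informative.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)

# Train an opaque ensemble model (the "black box").
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy: a large drop
# means the model relies heavily on that feature for its decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {imp:.3f}")
```

Techniques like this give a human-readable ranking of which inputs drive a model's decisions, which is one small ingredient of the kind of value-based explanations the platform targets.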