XMANAI progresses beyond the state of the art, delivering solid methods in both AI and manufacturing that allow data scientists to build, train and validate a catalogue of hybrid, graph-based, interoperable Explainable AI models for different manufacturing problems
Moving from ‘black box’ to ‘glass box’ Artificial Intelligence in Manufacturing
XMANAI aims to place the indisputable power of Explainable AI at the service of manufacturing and human progress

What is artificial intelligence (AI) and how does it work? For many people, these questions are not easy to answer, because many machine learning and deep learning algorithms cannot be examined after their execution. The EU-funded XMANAI project focuses on explainable AI, a concept that counters the ‘black box’ problem in machine learning, where even the designers cannot explain why the AI reaches a specific decision. XMANAI will carve out a ‘human-centric’, trustful approach that will be tested in real-life manufacturing cases. The aim is to transform the manufacturing value chain with ‘glass box’ models that are explainable to a ‘human in the loop’ and produce value-based explanations.

EMERGING EXPLAINABLE AI (XAI) CIRCLE (AI TRUST LEVEL 1)

Traditional AI algorithms, in which data scientists focus on understanding the data at hand by exploring the industrial data, experimenting with the extracted features to construct appropriate AI models, and visualizing the results. Such visualizations typically act as the primer for communicating results to business experts in order to inform them on how to take action.
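The Trust Level 1 workflow above (explore data, train a conventional model, visualize the drivers of its results for business experts) can be sketched minimally as follows. The synthetic dataset and random-forest model are illustrative assumptions, not part of the XMANAI catalogue.

```python
# Minimal "Trust Level 1" sketch: train a conventional model and report
# feature importances as the visual primer for business experts.
# Dataset and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)

# Impurity-based importances: a typical first visualization showing
# which inputs drive the model's predictions.
importances = model.feature_importances_
print(f"accuracy={accuracy:.2f}")
print("feature importances:", importances.round(3))
```

In practice these importances would be plotted (e.g. as a bar chart) rather than printed, but the communication step is the same: the model itself stays opaque, and only its aggregate behaviour is explained.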

DEVELOPING EXPLAINABLE AI (XAI) CIRCLE (AI TRUST LEVEL 2)

“Hybrid” AI algorithms, in which the typical machine learning and deep learning algorithms are complemented by post hoc interpretation methods, e.g. surrogate models. Explainability of results for business experts is also pursued through explanations elicited by the data scientists and by highlighting the features whose presence or absence drives a prediction.
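A post hoc surrogate of the kind mentioned above can be sketched as follows: a shallow, interpretable decision tree is trained to mimic an opaque model's predictions, and its rules then serve as an approximate explanation. The dataset and both model choices are illustrative assumptions.

```python
# "Trust Level 2" sketch: a global surrogate model. A shallow decision
# tree is fitted to the predictions of an opaque model; its readable
# rules approximate the black box's behaviour.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate learns to reproduce the black box's outputs, not the
# raw labels; that is what makes it an explanation of the model.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity={fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(4)]))
```

The fidelity score matters: a surrogate is only a trustworthy explanation in regions where it closely tracks the original model, which is why such methods sit at an intermediate trust level.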

ESTABLISHED EXPLAINABLE AI (XAI) CIRCLE (AI TRUST LEVEL 3)

Intrinsically interpretable AI models, including Graph AI models that provide context for decisions through their knowledge graphs, as well as efficiency and credibility by highlighting which individual layers and thresholds have led to a prediction. Business experts, for their part, encounter emerging explanation interfaces that help them understand how data, predictions and algorithms actually influence their decisions.
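The idea of highlighting the exact thresholds that led to a prediction can be sketched with an intrinsically interpretable model. A decision tree stands in here for the broader family (which in XMANAI also includes Graph AI models); the Iris dataset is an illustrative assumption.

```python
# "Trust Level 3" sketch: an intrinsically interpretable model. The
# decision path for one sample exposes the exact feature tests and
# thresholds behind its prediction, with no post hoc approximation.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

sample = data.data[:1]
node_indicator = tree.decision_path(sample)  # nodes this sample visits
leaf_id = tree.apply(sample)[0]              # the leaf it ends up in

# Walk the visited nodes and report each threshold test applied.
for node_id in node_indicator.indices:
    if node_id == leaf_id:
        pred = data.target_names[tree.predict(sample)[0]]
        print(f"leaf {node_id}: predicted class '{pred}'")
        break
    feat = tree.tree_.feature[node_id]
    thr = tree.tree_.threshold[node_id]
    op = "<=" if sample[0, feat] <= thr else ">"
    print(f"node {node_id}: {data.feature_names[feat]} "
          f"= {sample[0, feat]:.2f} {op} {thr:.2f}")
```

Each printed line is a faithful, human-readable step of the model's own reasoning, which is the property that explanation interfaces at this trust level surface to business experts.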

LATEST NEWS AND ARTICLES

PARTNERS