Frequently Asked Questions

Explainable Artificial Intelligence for Manufacturing

Moving from ‘black box’ to ‘glass box’ Artificial Intelligence in Manufacturing

Despite the indisputable benefits that Artificial Intelligence (AI) can bring to society and to any industrial activity, humans typically have little insight into AI itself, and even less into how AI systems make decisions or predictions, due to the so-called “black-box effect”. Many machine learning and deep learning algorithms are opaque and cannot be examined after execution to understand how and why a decision was made. In this context, and to increase trust in AI systems, XMANAI aims at rendering humans (especially business experts from the manufacturing domain) capable of fully understanding how decisions have been reached and what has influenced them.

XMANAI aims at placing the indisputable power of Explainable AI at the service of manufacturing and human progress, carving out a “human-centric”, trustworthy approach that is respectful of European values and principles, and adopting the mentality that “our AI is only as good as we are”. It will do so by developing and deploying a novel Explainable AI (XAI) Platform, powered by a catalogue of explainable hybrid and graph AI models and a set of manufacturing apps that, together, inspire trust, augment human cognition and solve concrete manufacturing problems with value-based explanations. XMANAI will validate its results in four realistic, exemplary manufacturing demonstrators with high impact in:
  • optimizing performance and the quality of manufacturing products and processes,
  • accurately forecasting product demand,
  • optimizing production and enabling predictive maintenance, and
  • enabling agile planning processes.
The aim is to transform the manufacturing value chain with ‘glass box’ models that are explainable to a ‘human in the loop’ and that produce value-based explanations for:
  • Data scientists – to understand the problem at hand, create AI models and derive actionable insights from data in different application domains.
  • Data engineers – to build the necessary underlying infrastructure to collect and prepare data, and to deploy AI models in a scalable manner.
  • Business experts – to understand the results of an analysis in a tangible manner and take more informed decisions depending on the pilot case.
XMANAI aims at unleashing the power of Explainable AI in manufacturing by building business experts’ trust in what they currently regard as black-box algorithms’ results, and by demonstrating what the interplay between data, models and human experts should look like for more informed decision making. XMANAI aspires to become one of the flagship, reference Industrial Explainable AI Platforms in manufacturing, with a user-driven, industry-led mentality and a market-oriented approach that addresses the inherent AI-related hurdles in a realistic and tangible manner. The project will deliver “glass box” AI models that are explainable to a “human-in-the-loop” without greatly sacrificing AI performance. With appropriate methods and techniques to address data scientists’ pain points such as lifecycle management, security and trusted sharing of complex AI assets (including data and AI models), XMANAI provides the tools to navigate AI’s “transparency paradox” and therefore:
  • accelerates business adoption by addressing the concern that “if manufacturers do not understand why/how a decision/prediction is reached, they will not adopt or enforce it”, and
  • fosters improved human/machine intelligence collaboration in manufacturing decision making, while ensuring regulatory compliance.
Through the scalable approach towards Explainable and Trustworthy AI advocated and supported in XMANAI, manufacturers will be able to develop a robust AI capability that is less artificial and more intelligent, at both human and corporate levels, in a win-win manner.

XMANAI progresses beyond the state of the art by delivering solid methods in both AI and manufacturing that allow data scientists to build, train and validate a catalogue of hybrid, graph-based, interoperable Explainable AI models for different manufacturing problems; these models inspire trust, augment human cognition and comply with ethics principles. Data scientists and manufacturing business experts will work together in XMANAI to train and validate the baseline AI models while detecting and mitigating bias in training datasets, leading to a portfolio of trained AI models that address core manufacturing problems. XMANAI will effectively consolidate and securely manage the lifecycle of all AI-related assets, semi-automating the management process and alleviating underlying data- and model-related challenges while establishing seamless collaboration among data scientists, data engineers and business experts.
XMANAI will deliver a novel Explainable AI platform fully aligned with manufacturing needs and idiosyncrasies, acting as a single reference point of access for both AI and manufacturing value chain stakeholders, and allowing seamless interoperation with on-premise environments through open, standards-based APIs. It seeks to integrate and further advance existing data-driven technologies, tools and libraries that accelerate trusted AI model lifecycle management, turning the AI journey into multi-stakeholder value.

As a research and innovation action, XMANAI is expected to design and implement a number of innovative Explainable AI techniques, together with technologies for managing, sharing and securing complex AI assets (data and models), to multiply the latent data value in a trusted manner, while integrating into its final platform a plethora of tools and methods coming from open-source initiatives, past and ongoing projects, as well as other highly operational services. A non-exhaustive list of XMANAI results includes:
  • XMANAI Core Platform – The core, centralized XMANAI platform, integrating different data-driven services bundles (e.g. for Data Collection, Data Storage, Secure Asset Sharing, Data Manipulation, AI Model Lifecycle Management, AI Insights) to be developed during the project and ensuring seamless communication with the XMANAI on-premise environments and with secure AI execution clusters in the cloud.
  • XMANAI On-Premise Environments – The on-premise environments to be installed in the stakeholders’ premises for increased end-to-end security, locally processing various data ingestion-manipulation-AI model execution jobs (through on-premise workers) and storage of different assets (data/models/features/experiments/results) in trusted data containers.
  • XMANAI Open APIs – The Open APIs to be designed in accordance with the latest API practices to ensure interoperability of the XMANAI platform with external platforms (particularly the AI4EU platform and manufacturing operational systems).
  • XMANAI Manufacturing Apps – The Manufacturing Apps to be designed bringing different technologies to the factories, and leveraging the AI trained models to solve concrete manufacturing problems.
  • XMANAI Baseline and Trained AI Models – A catalogue of the supported hybrid and graph machine learning and deep learning algorithms that are offered as: (a) a baseline implementation, and (b) a trained and validated configuration for the demonstrators’ purposes.
  • Manufacturing Data Model & Knowledge Graphs – An extensible manufacturing data model and knowledge graphs building on existing manufacturing semantic models and ontologies.
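To illustrate the idea behind the Manufacturing Data Model & Knowledge Graphs result, the sketch below represents manufacturing knowledge as subject-predicate-object triples. The entity names (e.g. `Machine_01`) and relation names (e.g. `produces`) are invented for this illustration and are not part of the XMANAI data model:

```python
from collections import defaultdict

class KnowledgeGraph:
    """A tiny triple store: (subject, predicate, object) facts with lookup."""

    def __init__(self):
        self._by_subject = defaultdict(set)

    def add(self, subject, predicate, obj):
        """Record one fact about `subject`."""
        self._by_subject[subject].add((predicate, obj))

    def objects(self, subject, predicate=None):
        """Objects linked to `subject`, optionally filtered by predicate."""
        return sorted(o for p, o in self._by_subject[subject]
                      if predicate is None or p == predicate)

# Illustrative manufacturing facts (all names are hypothetical)
kg = KnowledgeGraph()
kg.add("Machine_01", "locatedIn", "ProductionLine_A")
kg.add("Machine_01", "produces", "Part_X")
kg.add("Machine_01", "hasSensor", "VibrationSensor_3")
kg.add("Part_X", "usedIn", "Product_Y")
```

The triple-based structure stays extensible: new entity types or relations are added as new facts, without schema changes, which is one reason knowledge graphs suit evolving manufacturing semantic models.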
The Minimum Viable Product (MVP) refers to a version of a product with the minimum set of features and functionalities that can satisfy early adopters who, in turn, can promptly provide feedback for future product improvements. As of September 2021, the XMANAI MVP focuses on the Explainable AI features as must-haves, namely:
  • Collaboration over AI pipelines creation (experiments comparison, history of events, simulations of different settings, models, and methods for same task)
  • Explainability Methods Management (add/remove/configure, register/import)
  • Collaboration over AI model/results/pipelines explanations (application of explainability methods at AI pipeline or model level, results querying)
  • Explainability Results Visualisation (various charts, adjusted to the user profile)
  • Explainability Results Evaluation (allow manual feedback & results validation)
  • Model Security Assessment.
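The Explainability Methods Management feature (add/remove/configure, register) can be pictured with a minimal registry sketch. The interface and the toy `feature_contributions` method below are illustrative assumptions, not the XMANAI API:

```python
class ExplainabilityRegistry:
    """Hypothetical registry of explainability methods (add/remove/configure)."""

    def __init__(self):
        self._methods = {}

    def register(self, name, fn, **default_config):
        """Register a method under a unique name with a default configuration."""
        if name in self._methods:
            raise ValueError(f"method '{name}' already registered")
        self._methods[name] = {"fn": fn, "config": dict(default_config)}

    def configure(self, name, **config):
        """Update a registered method's configuration."""
        self._methods[name]["config"].update(config)

    def remove(self, name):
        self._methods.pop(name, None)

    def explain(self, name, model, data):
        """Apply a registered method to a model and an input sample."""
        entry = self._methods[name]
        return entry["fn"](model, data, **entry["config"])

# Toy explainability method: rank features by |weight * value| contribution
def feature_contributions(model, x, top_k=2):
    contributions = [(f, w * v) for f, w, v in
                     zip(model["features"], model["weights"], x)]
    contributions.sort(key=lambda t: abs(t[1]), reverse=True)
    return contributions[:top_k]

registry = ExplainabilityRegistry()
registry.register("contributions", feature_contributions, top_k=2)

model = {"features": ["temperature", "vibration", "speed"],
         "weights": [0.5, 2.0, -0.1]}
explanation = registry.explain("contributions", model, [10.0, 1.5, 100.0])
```

Keeping methods behind a registry like this is what makes them pluggable at the AI-pipeline or model level: a new technique can be registered, configured per user, and applied to results without changes to the pipeline itself.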
Such features illustrate that explainability needs to be pursued in XMANAI along three axes:
  • Understanding Data, as the foundation of AI explainability, ensured by properly ingesting data, extracting their structure and semantics, and allowing for sample data exploration, summary statistics and visualizations.
  • Explaining Results of AI models in a comprehensive, yet interactive way through different explainability techniques, in order to bring business users and data scientists/engineers to the same page.
  • Understanding the inner workings of AI models in order to build robust and reliable AI solutions that inspire trust in manufacturers.
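These three axes can be made concrete with a small, stdlib-only sketch on invented sensor data: summary statistics for understanding the data, a transparent one-variable least-squares model whose coefficients are fully inspectable, and a plain-language explanation of its results. All variable names and numbers here are illustrative, not taken from any XMANAI demonstrator:

```python
from statistics import mean, stdev

# Toy sensor readings (invented numbers)
temperature = [60.0, 65.0, 70.0, 75.0, 80.0]   # input feature
defect_rate = [1.0, 1.5, 2.0, 2.5, 3.0]        # target to predict

# Axis 1 -- Understanding Data: summary statistics before any modelling
summary = {"mean": mean(temperature), "stdev": stdev(temperature)}

# Axis 2 -- Understanding the model: ordinary least squares, y = a*x + b,
# whose two coefficients can be read and checked directly ('glass box')
mx, my = mean(temperature), mean(defect_rate)
a = (sum((x - mx) * (y - my) for x, y in zip(temperature, defect_rate))
     / sum((x - mx) ** 2 for x in temperature))
b = my - a * mx

# Axis 3 -- Explaining Results: a prediction paired with a plain-language reason
def explain(x):
    return {"prediction": a * x + b,
            "explanation": f"each +1 degree adds {a:.2f} to the defect rate"}
```

A business expert can verify the slope against their own process knowledge, which is precisely the kind of value-based, human-checkable explanation a black-box model cannot offer.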