Moving from ‘black box’ to ‘glass box’ Artificial Intelligence in Manufacturing, with XMANAI

Artificial Intelligence (AI) is finding its way into a broad range of industries, manufacturing among them. The decisions and predictions made by AI-enabled systems are becoming increasingly consequential, and in many cases critical to success and profitability, with companies potentially able to double their cash flow within the next five years. Manufacturing is firmly in the leading group, owing to its heavy reliance on data.

Machine learning's strength in detecting anomalies in production and packaging processes has significant potential to increase throughput and product quality, as well as to reduce machinery downtime and maintenance costs. On the cusp of Industry 5.0, which brings human-machine collaboration and personalisation to the foreground, the need to diffuse AI across diverse innovation pathways, solving existing and future manufacturing problems on the way to the 2025 Factories of the Future, has never been greater.

However, despite the indisputable benefits that AI can bring to society and to any industrial activity, humans typically have little insight into AI itself, and even less into how AI systems reach their decisions or predictions, due to the so-called "black-box effect". Can you explain why and how your system generated a specific decision?

The inner workings of machine learning and deep learning are not exactly transparent, and as algorithms grow more complicated, fears of undetected bias, mistakes, and misinterpretations creeping into decision making naturally grow among manufacturers and practically every stakeholder. If not addressed properly, this lack of explainability and trust could jeopardize the full potential of AI. This is where Explainable AI (XAI) and the XMANAI project come into action!

XAI is an emerging field that seeks to make the internal logic and output of AI algorithms transparent and interpretable: it addresses how black-box decisions are made by inspecting and attempting to understand the steps and models involved in decision making. Making these processes humanly understandable, and answering questions such as the one above, greatly increases human trust. As the goal of AI is to support and optimize processes, people must feel empowered and understand how the system works.

Whether by pre-emptive design or retrospective analysis, XAI already applies methods that add interpretability to AI output by mapping outputs to inputs. Simpler forms of machine learning, such as decision trees and Bayesian classifiers, offer a degree of traceability and transparency in their decision making and can provide the visibility needed for critical AI systems. More complicated algorithms, such as neural networks, sacrifice transparency and explainability for power, performance, and accuracy, so interpretation methods are often applied after model training to explain the results. Recently, graph machine learning and the potential of graphs in AI have also begun to attract significant attention from AI scientists. Building on the latest AI advancements and technological breakthroughs, XMANAI will therefore focus its research activities on XAI, making AI models understandable and actionable, step by step, at multiple layers (data, model, results).
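To make the contrast concrete, here is a minimal sketch in Python, assuming scikit-learn; the sensor-style feature names are purely illustrative and not part of XMANAI. It first trains a small decision tree whose rules can be printed directly ("glass box" by design), then applies a post-hoc, model-agnostic method (permutation importance, one simple stand-in for the broader family of post-training explanation techniques) to a neural network whose internals are opaque.

    # Minimal sketch, assuming scikit-learn; feature names are illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.neural_network import MLPClassifier
    from sklearn.inspection import permutation_importance

    feature_names = ["temperature", "vibration", "pressure", "cycle_time"]
    X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                               n_redundant=1, random_state=0)

    # "Glass box" by design: the fitted tree can be printed as readable rules.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=feature_names))

    # "Black box" plus post-hoc explanation: permutation importance measures
    # how much shuffling each input degrades the trained network's accuracy.
    mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                        random_state=0).fit(X, y)
    result = permutation_importance(mlp, X, y, n_repeats=10, random_state=0)
    for name, score in zip(feature_names, result.importances_mean):
        print(f"{name}: importance {score:.3f}")

The decision tree explains itself before deployment; the network needs an explanation layer bolted on afterwards, which is exactly the pre-emptive versus retrospective distinction above.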

The project will deliver "glass box" AI models that are explainable to a "human in the loop", preserving security and privacy without greatly sacrificing AI performance. XMANAI provides the tools to navigate AI's "transparency paradox" by designing, developing and deploying a novel Explainable AI Platform powered by explainable AI models that inspire trust, augment human cognition and solve concrete manufacturing problems with value-based explanations. Adopting the mentality that "AI systems should think like humans, act like humans, think rationally, and act rationally", a catalogue of hybrid and graph AI models will be built, fine-tuned and validated in XMANAI at two levels (illustrated with a short sketch after the list):

  • baseline AI models that will be reusable to address any manufacturing problem, and
  • trained AI models that have been fine-tuned for the different problems that the XMANAI pilot demonstrators target.
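A minimal sketch of this two-level idea, again in Python with scikit-learn; the classes, data and fine-tuning mechanism here are illustrative assumptions, not XMANAI's actual catalogue or API. A baseline model is trained once on broadly representative data, then fine-tuned for one pilot by warm-starting from the baseline weights.

    # Conceptual sketch only; XMANAI's actual catalogue and APIs may differ.
    from sklearn.datasets import make_regression
    from sklearn.neural_network import MLPRegressor

    # Level 1: a baseline model, reusable across manufacturing problems.
    X_generic, y_generic = make_regression(n_samples=1000, n_features=5,
                                           noise=0.1, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64,), warm_start=True,
                         max_iter=300, random_state=0)
    model.fit(X_generic, y_generic)

    # Level 2: a trained model, fine-tuned for one pilot demonstrator.
    # With warm_start=True, fit() continues from the baseline weights
    # instead of reinitializing the network.
    X_pilot, y_pilot = make_regression(n_samples=200, n_features=5,
                                       noise=0.1, random_state=1)
    model.fit(X_pilot, y_pilot)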

Over the 42 months of the project, a bundle of innovative manufacturing applications and services will also be built on top of the XMANAI Explainable AI Platform, leveraging the XMANAI catalogue of baseline and trained AI models. Finally, XMANAI will validate its AI platform, its catalogue of hybrid and graph AI models and its manufacturing apps in four realistic, exemplary manufacturing demonstrators with high impact in: (a) optimizing manufacturing performance and product and process quality, (b) accurately forecasting product demand, (c) optimizing production and predictive maintenance, and (d) enabling agile planning processes.

Through the scalable approach to Explainable and Trustworthy AI championed by XMANAI, manufacturers will be able to develop a robust AI capability that is less artificial and more intelligent, benefiting both people and companies in a win-win manner.