RESOURCES

DISSEMINATION MATERIALS:

18. NEWSLETTER #10 (May 2024)

This newsletter includes four sections:
  • Project Highlights;
  • Research Excellence;
  • Stakeholder Engagement;
  • Testimonies.

Available for download here.

17. Flyer Whirlpool

This flyer presents the problem addressed, the pilot objectives, and the implemented use cases.

Available for download here.

16. Flyer Unimetrik

This flyer presents the problem addressed, the pilot objectives, and the implemented use cases.

Available for download here.

15. Flyer Ford

This flyer presents the problem addressed, the pilot objectives, and the implemented use cases.

Available for download here.

14. Flyer CNH

This flyer presents the problem addressed, the pilot objectives, and the implemented use cases.

Available for download here.

13. BOOKLET

The XMANAI Booklet covers the following main topics:

  • What is XAI?
  • Platform Overview
  • Pilot Use Cases (CNH, Ford, Unimetrik and Whirlpool)

Available for download here.

12. NEWSLETTER #9 (February 2024)

This newsletter includes two sections:
  • XMANAI Platform;
  • Final General Assembly.

Available for download here.

11. NEWSLETTER #8 (August 2023)

This newsletter includes two sections:
  • Whirlpool Pilot;
  • XMANAI Hackathon event.

Available for download here.

10. NEWSLETTER #7 (June 2023)

This newsletter includes two sections:
  • UNIMETRIK Pilot;
  • Dissemination and Project Activities.

Available for download here.

9. NEWSLETTER #6 (May 2023)

This newsletter includes two sections:
  • Ford Pilot
  • Hackathon event

Available for download here.

8. NEWSLETTER #5 (April 2023)

This newsletter includes two sections:
  • CNHi Pilot
  • Dissemination and Project Activities

Available for download here.

7. NEWSLETTER #4 (December 2022)

This newsletter includes four sections:
  • XAI in Manufacturing
  • Hybrid ML Explainability
  • Draft Catalogue of XMANAI AI and Graph Machine Learning Models
  • Dissemination and Project Activities

Available for download here.

6. NEWSLETTER #3 (July 2022)

Our third newsletter includes the following sections:
  • The Value of Meaningful Data
  • Towards a Common Data Model
  • XMANAI Graph Data Model

Available for download here.

5. NEWSLETTER #2 (November 2021)

Driven by our key message “Moving from ‘black box’ to ‘glass box’ Artificial Intelligence in Manufacturing”, the project has made notable progress during its second semester. Check out our second newsletter with sections on:

  • Key Stakeholders
  • A Look into our Pilots
  • Requirements Elicitation
  • XMANAI Minimum Viable Product (MVP)
  • Dissemination and Collaboration

Available for download here.

4. XMANAI ROLL-UP

The XMANAI roll-up presents the overall project concept and objectives. In 2022, it is expected that the COVID-19 pandemic will allow, at least to a certain degree, a return to physical and hybrid interaction with the community, where the roll-up can be used.

XMANAI focuses on explainable AI, a concept that contradicts the idea of the ‘black box’ in machine learning. Carving out a ‘human-centric’ trustful approach tested in industrial demonstrators, the project aims to transform the manufacturing value chain with ‘glass box’ models explainable to a ‘human in the loop’.

Available for download here.

3. XMANAI TRIFOLD

The trifold is an effective means to disseminate the project at both physical and online events, summarising not only the concept, factsheet, and objectives but also the main results available to date. Within the trifold you will find a brief presentation of the pilots and the high-level architecture, highlighting the core XMANAI services.

Available for download here.

2. NEWSLETTER #1 (May 2021)

Driven by our key message “Moving from ‘black box’ to ‘glass box’ Artificial Intelligence in Manufacturing”, the project has made notable progress during its first six months. Check out our first newsletter with sections on:

  • Project Objective
  • Meet Our Team
  • XMANAI Concept and Approach
  • XMANAI Scientific Workshop
  • Collaboration

Available for download here.

1. XMANAI FLYER

XMANAI consists of 15 partners from 7 countries: Italy, Germany, Spain, Estonia, Portugal, Greece, and Cyprus. As depicted in the flyer, the consortium is well balanced in terms of research-industry collaboration, combining collective expertise from industry, research, academia, and technology providers with solid backgrounds in manufacturing, data science and AI, big data, and human-machine ethics.

Available for download here.

PUBLICATIONS:

A methodology to guide companies in using Explainable AI-driven interfaces in manufacturing contexts

Nowadays, the increasing integration of artificial intelligence (AI) technologies in manufacturing processes is raising the need for users to understand and interpret the decision-making processes of complex AI systems. Traditional black-box AI models often lack transparency, making it challenging for users to comprehend the reasoning behind their outputs. In contrast, Explainable Artificial Intelligence (XAI) techniques provide interpretability by revealing the internal mechanisms of AI models, making them more trustworthy and facilitating human-AI collaboration. In order to promote the dissemination of XAI models, this paper proposes a matrix-based methodology to design XAI-driven user interfaces in manufacturing contexts. It helps in mapping the users’ needs and identifying the “explainability visualization types” that best fit the end users’ requirements for the specific context of use. The proposed methodology was applied in the XMANAI European Project (https://ai4manufacturing.eu), aimed at creating a novel AI platform to support XAI-supported decision making in manufacturing plants. Results showed that the proposed methodology is able to guide companies in the correct implementation of XAI models, realizing the full potential of AI while ensuring human oversight and control.
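
As a rough illustration of such a matrix-based mapping (the needs, visualization types, and scores below are invented for illustration, not taken from the paper), a user profile can weight the elicited needs and score each candidate “explainability visualization type”:

    import numpy as np

    # Hypothetical user needs (rows of M) and visualization types (columns of M)
    needs = ["understand why this output", "rank feature importance", "explore what-if scenarios"]
    viz_types = ["saliency map", "importance bar chart", "counterfactual table"]

    # M[i, j]: assumed fit (0..1) of visualization type j for user need i
    M = np.array([[0.9, 0.4, 0.3],
                  [0.3, 0.9, 0.2],
                  [0.2, 0.3, 0.9]])

    profile = np.array([0.2, 0.7, 0.1])        # one user's weighting of the needs
    scores = profile @ M                       # fit score per visualization type
    print(viz_types[int(np.argmax(scores))])   # -> "importance bar chart"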

Available for download here.

Explainability as the key ingredient for AI adoption in Industry 5.0 settings

Explainable Artificial Intelligence (XAI) has gained significant attention as a means to address the transparency and interpretability challenges posed by black-box AI models. In the context of the manufacturing industry, where complex problems and decision-making processes are widespread, the XMANAI platform emerges as a solution to enable transparent and trustworthy collaboration between humans and machines. By leveraging advancements in XAI and fostering prompt collaboration between data scientists and domain experts, the platform enables the construction of interpretable AI models that offer high transparency without compromising performance. This paper introduces the approach to building the XMANAI platform and highlights its potential to resolve the “transparency paradox” of AI. The platform not only addresses technical challenges related to transparency but also caters to the specific needs of the manufacturing industry, including lifecycle management, security, and trusted sharing of AI assets. The paper provides an overview of the XMANAI platform’s main functionalities, addressing the challenges faced during development and presenting the evaluation framework used to measure the performance of the delivered XAI solutions. It also demonstrates the benefits of the XMANAI approach in achieving transparency in manufacturing decision-making, fostering trust and collaboration between humans and machines, improving operational efficiency, and optimizing business value.

Available for download here.

Process and Product Quality Optimization with Explainable Artificial Intelligence

In today’s rapidly evolving technological landscape, businesses across various industries face a critical challenge: maintaining and enhancing the quality of both their processes and the products they deliver. Traditionally, this task has been tackled through manual analysis, statistical methods, and domain expertise. However, with the advent of artificial intelligence (AI) and machine learning, new opportunities have emerged to revolutionize quality optimization. This chapter explores the process and product quality optimization in a real industrial use case with the help of explainable artificial intelligence (XAI) techniques. While AI algorithms have proven their effectiveness in improving quality, one of the longstanding barriers to their widespread adoption has been the lack of interpretability and transparency in their decision-making processes. XAI addresses this concern by enabling human stakeholders to understand and trust the outcomes of AI models, thereby empowering them to make informed decisions and take effective actions.

Available for download here.

XAI for Product Demand Planning: Models, Experiences, and Lessons Learnt

Today, Explainable AI is gaining more and more traction due to its inherent added value of allowing all involved stakeholders to understand why and how a decision has been made by an AI system. In this context, the problem of Product Demand Forecasting as faced by Whirlpool has been elaborated and tackled through an Explainable AI approach. The Explainable AI solution has been designed and delivered in the H2020 XMANAI project and is presented in detail in this chapter. The core XMANAI Platform has been used by data scientists to experiment with the data and configure Explainable AI pipelines, while a dedicated manufacturing application targets business users who need to view and gain insights into product demand forecasts. The overall Explainable AI approach has been evaluated by the end users in Whirlpool. This chapter presents experiences and lessons learnt from this evaluation.
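
As a flavour of what such a pipeline can look like in code, here is a minimal sketch; the chapter does not specify the model or the explanation technique, so the gradient-boosting model, the SHAP explainer, and the feature semantics are stand-ins:

    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    # Hypothetical weekly demand drivers: price index, promotion, seasonality, stock
    X = rng.normal(size=(500, 4))
    y = 1000 + 150 * X[:, 1] + 40 * X[:, 2] + rng.normal(scale=20, size=500)

    model = GradientBoostingRegressor().fit(X, y)          # forecasting model
    contrib = shap.TreeExplainer(model).shap_values(X[:5])
    print(contrib.shape)  # (5, 4): additive per-feature contribution to each forecast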

Available for download here.

Holistic Production Overview: Using XAI for Production Optimization

This chapter introduces the work performed in XMANAI to address the need for explainability in manufacturing AI systems applied to optimize production lines. The XMANAI platform is designed to meet the needs of manufacturing factories, offering them a unified framework to leverage their data and extract valuable insights. Within the project, the Ford use case is focused on forecasting production in a dynamically changing manufacturing line, serving as a practical illustration of the platform’s capabilities. This chapter focuses on the application of explainability using Hybrid Models and Heterogeneous Graph Machine Learning (ML) techniques. Hybrid Models combine traditional AI models with eXplainable AI (XAI) tools, while Heterogeneous Graph ML techniques use Graph Attention (GAT) layers to extract explainability in complex manufacturing scenarios where data can be represented as a graph. To understand explainability applied to the Ford use case, this chapter describes the initial needs of the scenario, the infrastructure behind the use case and the results obtained, showcasing the effectiveness of this approach, where models are trained in the XMANAI platform. Specifically, a description is given of the results of production forecasting in an engine assembly plant, while providing interpretable explanations when deviations from expected values are predicted.
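
To make the attention-based explainability concrete, the sketch below implements a single, simplified GAT layer in plain numpy (a generic textbook formulation, not XMANAI code): the coefficient alpha[i, j] quantifies how much neighbour j contributes to node i's updated representation, and it is this kind of per-edge weight that can be surfaced as an explanation.

    import numpy as np

    def gat_layer(H, A, W, a, slope=0.2):
        """Single-head GAT layer. H: node features [N, F]; A: adjacency [N, N]
        (must include self-loops); W: weights [F, F']; a: attention vector [2*F']."""
        Z = H @ W
        N = H.shape[0]
        e = np.full((N, N), -np.inf)          # -inf masks non-edges in the softmax
        for i in range(N):
            for j in range(N):
                if A[i, j]:
                    s = a @ np.concatenate([Z[i], Z[j]])
                    e[i, j] = np.maximum(s, slope * s)   # LeakyReLU
        alpha = np.exp(e - e.max(axis=1, keepdims=True))
        alpha /= alpha.sum(axis=1, keepdims=True)        # attention per neighbourhood
        return alpha @ Z, alpha   # alpha[i, j] doubles as an edge-level explanation

    # Tiny example: three stations in a line (chain graph with self-loops)
    H = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
    A = np.eye(3) + np.diag([1.0, 1.0], 1) + np.diag([1.0, 1.0], -1)
    _, alpha = gat_layer(H, A, W=np.eye(2), a=np.ones(4))
    print(alpha.round(2))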

Available for download here.

Toward Explainable Metrology 4.0: Utilizing Explainable AI to Predict the Pointwise Accuracy of Laser Scanning Devices in Industrial Manufacturing

The field of metrology, which focuses on the scientific study of measurement, is grappling with a significant challenge: predicting the measurement accuracy of sophisticated 3D scanning devices. These devices, though transformative for industries like manufacturing, construction, and archeology, often generate complex point cloud data that traditional machine learning models struggle to manage effectively. To address this problem, we proposed a PointNet-based model, designed inherently to navigate point cloud data complexities, thereby improving the prediction of the scanning devices’ measurement accuracy. Our model not only achieved superior performance in terms of mean absolute error (MAE) across all three axes (XYZ) but also provided a visually intuitive means to understand errors through 3D deviation maps. These maps quantify and visualize the predicted and actual deviations, which enhances the model’s explainability as well. This level of explainability offers a transparent tool to stakeholders, assisting them in understanding the model’s decision-making process and ensuring its trustworthy deployment. Therefore, our proposed model offers significant value by elevating the level of precision, reliability, and explainability in any field that utilizes 3D scanning technology. It promises to mitigate costly measurement errors, enhance manufacturing precision, improve architectural designs, and preserve archeological artifacts with greater accuracy.
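
The evaluation quantities mentioned above reduce to a short computation; here is a sketch with placeholder numbers (not the paper's data) of the per-axis MAE and of the per-point residuals that a 3D deviation map would colour-code:

    import numpy as np

    rng = np.random.default_rng(0)
    true_dev = rng.normal(scale=0.02, size=(10000, 3))              # measured XYZ deviations (assumed, mm)
    pred_dev = true_dev + rng.normal(scale=0.005, size=(10000, 3))  # model predictions

    mae_xyz = np.abs(pred_dev - true_dev).mean(axis=0)       # MAE per axis (X, Y, Z)
    residual = np.linalg.norm(pred_dev - true_dev, axis=1)   # colour value per point
    print("MAE per axis:", mae_xyz.round(4))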

Available for download here.

UX-Driven Methodology to Design Usable Augmented Reality Applications for Maintenance

In recent decades, industrial development has led to increasingly sophisticated machinery and systems, which require complex maintenance routines. Consequently, maintenance operators may not have sufficient skills to perform recovery procedures properly and quickly, creating the need for assistance from the manufacturer’s after-sales service or from companies specialized in maintenance services. Such actions usually lead to very long recovery times, high maintenance costs, and a temporary drop in production. In this scenario, we should consider that Industry 4.0 is making available innovative technologies, such as Augmented Reality (AR), suitable for improving the skills and competencies of operators without burdening their cognitive load and, consequently, their wellbeing. However, technologies must be selected, designed, and used according to the users’ needs to be effective and useful. The paper presents a user experience (UX)-driven methodology for designing user-centric AR applications for complex maintenance procedures. The methodology was applied to a real industrial case concerning the management of CNC machines in a plant producing tractor components, where a smartphone-based AR application was designed and tested with users. The satisfactory results highlighted the potential benefits of AR in industry and specifically in maintenance.

Available for download here.

RHALE: Robust and Heterogeneity-Aware Accumulated Local Effects

Accumulated Local Effects (ALE) is a widely-used explainability method for isolating the average effect of a feature on the output, because it handles cases with correlated features well. However, it has two limitations. First, it does not quantify the deviation of instance-level (local) effects from the average (global) effect, known as heterogeneity. Second, for estimating the average effect, it partitions the feature domain into user-defined, fixed-sized bins, where different bin sizes may lead to inconsistent ALE estimations. To address these limitations, we propose Robust and Heterogeneity-aware ALE (RHALE). RHALE quantifies the heterogeneity by considering the standard deviation of the local effects and automatically determines an optimal variable-size bin-splitting. In this paper, we prove that to achieve an unbiased approximation of the standard deviation of local effects within each bin, bin splitting must follow a set of sufficient conditions. Based on these conditions, we propose an algorithm that automatically determines the optimal partitioning, balancing the estimation bias and variance. Through evaluations on synthetic and real datasets, we demonstrate the superiority of RHALE compared to other methods, including the advantages of automatic bin splitting, especially in cases with correlated features.
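
A minimal numpy sketch of the quantities RHALE estimates (with fixed-size bins standing in for the paper's automatic variable-size splitting): within each bin, the mean of the per-instance derivatives gives the average effect that is accumulated into the ALE curve, and their standard deviation gives the heterogeneity.

    import numpy as np

    def binned_ale_with_heterogeneity(x_s, dfdx_s, n_bins=10):
        """x_s: feature values per instance [N]; dfdx_s: local effects df/dx_s [N]."""
        edges = np.linspace(x_s.min(), x_s.max(), n_bins + 1)
        idx = np.clip(np.digitize(x_s, edges) - 1, 0, n_bins - 1)
        mu = np.array([dfdx_s[idx == b].mean() if (idx == b).any() else 0.0
                       for b in range(n_bins)])      # average (global) effect per bin
        sigma = np.array([dfdx_s[idx == b].std() if (idx == b).any() else 0.0
                          for b in range(n_bins)])   # heterogeneity per bin
        ale = np.concatenate([[0.0], np.cumsum(mu * np.diff(edges))])
        return edges, ale, sigma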

Available for download here.

Regionally Additive Models: Explainable-by-design models minimizing feature interactions

Generalized Additive Models (GAMs) are widely used explainable-by-design models in various applications. GAMs assume that the output can be represented as a sum of univariate functions, referred to as components. However, this assumption fails in ML problems where the output depends on multiple features simultaneously. In these cases, GAMs fail to capture the interaction terms of the underlying function, leading to subpar accuracy. To (partially) address this issue, we propose Regionally Additive Models (RAMs), a novel class of explainable-by-design models. RAMs identify subregions within the feature space where interactions are minimized. Within these regions, it is more accurate to express the output as a sum of univariate functions (components). Consequently, RAMs fit one component per subregion of each feature instead of one component per feature. This approach yields a more expressive model compared to GAMs while retaining interpretability. The RAM framework consists of three steps. Firstly, we train a black-box model. Secondly, using Regional Effect Plots, we identify subregions where the black-box model exhibits near-local additivity. Lastly, we fit a GAM component for each identified subregion. We validate the effectiveness of RAMs through experiments on both synthetic and real-world datasets. The results confirm that RAMs offer improved expressiveness compared to GAMs while maintaining interpretability.
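
A toy end-to-end sketch of the three RAM steps on invented data (the subregions here are assumed to have been identified by the regional-effect analysis, which is not reproduced):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 2))
    y = np.where(X[:, 1] > 0, np.sin(3 * X[:, 0]), -np.sin(3 * X[:, 0]))  # x0-x1 interaction

    black_box = GradientBoostingRegressor().fit(X, y)   # step 1: black-box model

    # Steps 2-3 (simplified): suppose regional effect plots identified x1 <= 0 and
    # x1 > 0 as subregions where the model is near-locally additive in x0; RAM then
    # fits one univariate component of x0 per subregion instead of one global one.
    for name, mask in {"x1 <= 0": X[:, 1] <= 0, "x1 > 0": X[:, 1] > 0}.items():
        component = GradientBoostingRegressor(max_depth=2).fit(
            X[mask][:, [0]], black_box.predict(X[mask]))
        print(f"{name}: component fitted on {mask.sum()} points")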

Available for download here.

Explainable Artificial Intelligence Bundles for Algorithm Lifecycle Management in the Manufacturing Domain

Lack of understanding of machine learning models’ inner workings by business users inevitably leads to a lack of trust, particularly in critical operations. Recent advancements in artificial intelligence include the development of explainability methods and tools for machine learning and deep learning models. These explainable AI (XAI) techniques can significantly reduce the black-box effect that often hinders direct inclusion and automated integration of ML outputs in decision making. The manufacturing sector is gradually increasing its adoption of AI-enabled systems, a process accelerated in the context of the Fourth Industrial Revolution, strengthening the need for explainability and robust XAI pipelines. This paper presents a framework of processes and tools designed and developed to bring the benefits of explainability to AI-enabled decision making in the manufacturing domain, providing the mechanisms to create production-ready, trustful machine learning processes that foster collaboration among stakeholders coming from both business and technical backgrounds.

Available for download here.

A novel Explainable Artificial Intelligence and secure Artificial Intelligence asset sharing platform for the manufacturing industry

Over the past couple of years, implementations of Artificial Intelligence (AI) have significantly risen in numerous platforms, tools and applications around the world, impacting a broad range of industries such as manufacturing, towards Smart Factories and Industry 4.0 in general. Nevertheless, despite industrial AI being the driving force for smart factories, there is strong reluctance in its adoption by manufacturers due to the lack of transparency of black-box AI models and of trust behind the decisions taken, as well as the lack of awareness of where and how it should be incorporated in their processes and products. This paper introduces the Explainable AI platform of XMANAI, which takes advantage of the latest AI advancements and technological breakthroughs in Explainable AI (XAI) in order to build “glass box” AI models that are explainable to a “human-in-the-loop” without a decrease in AI performance. The core of the platform consists of a catalogue of hybrid and graph AI models which are built, fine-tuned and validated either as baseline AI models that are reusable to address any manufacturing problem, or as trained AI models that have been fine-tuned for solving concrete manufacturing problems in a trustful manner, through value-based explanations that are easily and effectively interpreted by humans.

Available for download here.

Towards Explainable AI Validation in Industry 4.0: A Fuzzy Cognitive Map-based Evaluation Framework for Assessing Business Value

The development of Artificial Intelligence (AI) systems in Industry 4.0 has gained momentum due to their potential for increasing efficiency and productivity. However, these AI systems can be highly complex and opaque, leading to concerns about their reliability, trustworthiness, and accountability. To address these issues, this paper proposes a validation framework for Explainable AI (XAI) in Industry 4.0 based on Fuzzy Cognitive Maps (FCMs). The proposed framework aims to evaluate Key Performance Indicators (KPIs) based on a set of AI metrics and XAI metrics. The FCM-based approach enables the representation of causal-effect relationships between the different concepts of the system using expert knowledge. The presented validation framework provides a theoretical background for evaluating and optimizing the business value of AI systems based on multiple criteria in the manufacturing industry, demonstrating its effectiveness. The main contributions of this paper are: i) the development of an FCM-based validation framework for XAI in Industry 4.0; ii) the identification of relevant AI and XAI metrics for the evaluation of the KPIs of the theoretical graph model; and iii) the demonstration of the effectiveness of the proposed framework through a case study. The results of this study provide valuable insights into the importance of considering not only accuracy but also efficiency and transparency when developing AI pipelines that generate higher business value. Overall, this paper offers a theoretical foundation and practical insights for organizations seeking to evaluate the business value of their AI systems in Industry 4.0. It emphasizes the importance of explainability and the integration of AI and XAI metrics in achieving transparent and accountable AI solutions that deliver optimal results for the manufacturing industry and beyond.
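
As a schematic of how an FCM propagates such causal-effect relationships (the concepts, weights, and initial activations below are invented for illustration, not the paper's model):

    import numpy as np

    concepts = ["model accuracy", "explanation quality", "user trust", "business value"]
    # W[i, j]: assumed causal influence of concept i on concept j
    W = np.array([[0.0, 0.0, 0.3, 0.5],
                  [0.0, 0.0, 0.6, 0.2],
                  [0.0, 0.0, 0.0, 0.7],
                  [0.0, 0.0, 0.0, 0.0]])

    a = np.array([0.8, 0.6, 0.0, 0.0])               # initial concept activations
    for _ in range(100):                             # iterate to a fixed point
        a_new = 1.0 / (1.0 + np.exp(-(a + a @ W)))   # standard sigmoid update rule
        if np.allclose(a_new, a, atol=1e-6):
            break
        a = a_new
    print(dict(zip(concepts, a.round(3))))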

Available for download here.

Industrial Asset Management and Secure Sharing for an XAI Manufacturing Platform

Explainable AI (XAI) is an emerging field that aims to address how black-box decisions of AI systems are made, by attempting to understand the steps and models involved in this decision-making. XAI in manufacturing is supposed to deliver predictability, agility, and resiliency across targeted manufacturing apps. In this context, large amounts of data, which can be of high sensitivity and various formats, need to be securely and efficiently handled. This paper proposes an Asset Management and Secure Sharing solution tailored to the XAI and Manufacturing context in order to tackle this challenge. The proposed asset management architecture enables an extensive data management and secure sharing solution for industrial data assets. Industrial data can be pulled, imported, managed, shared, and tracked with a high level of security using this design. This paper describes the solution’s overall architectural design and gives an overview of the functionalities and incorporated technologies of the involved components, which are responsible for data collection, management, provenance, and sharing as well as for overall security.

Available for download here.

Explainable AI in Manufacturing: an Analysis of Transparency and Interpretability Methods for the XMANAI Platform

The use of artificial intelligence (AI) in manufacturing has become increasingly common, but the lack of transparency and interpretability of AI models can limit their adoption in critical applications, due to the lack of human understanding. Explainable AI (XAI) has emerged as a solution to this problem by providing insights into the decision-making process of AI models. This paper analyses different approaches for AI transparency and interpretability, from explainability by design to post-hoc explainability, pointing out the trade-offs, advantages and disadvantages of each one, as well as providing an overview of different applications in manufacturing processes. The paper concludes by presenting XMANAI as an innovative platform for manufacturing users to develop insightful XAI pipelines that can assist in their everyday operations and decision-making. The comprehensive overview of methods serves as the basis for the platform’s draft catalogue of XAI models.

Available for download here.

DALE: Differential Accumulated Local Effects for efficient and accurate global explanations

Accumulated Local Effect (ALE) is a method for accurately estimating feature effects, overcoming fundamental failure modes of previously existing methods, such as Partial Dependence Plots. However, ALE’s approximation, i.e. the method for estimating ALE from the limited samples of the training set, faces two weaknesses. First, it does not scale well in cases where the input has high dimensionality, and, second, it is vulnerable to out-of-distribution (OOD) sampling when the training set is relatively small. In this paper, we propose a novel ALE approximation, called Differential Accumulated Local Effects (DALE), which can be used in cases where the ML model is differentiable and an auto-differentiation framework is accessible. Our proposal has significant computational advantages, making feature effect estimation applicable to high-dimensional Machine Learning scenarios with near-zero computational overhead. Furthermore, DALE does not create artificial points for calculating the feature effect, resolving misleading estimations due to OOD sampling. Finally, we formally prove that, under some hypotheses, DALE is an unbiased estimator of ALE and we present a method for quantifying the standard error of the explanation. Experiments using both synthetic and real datasets demonstrate the value of the proposed approach.
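
A numpy sketch of the estimator on a toy differentiable model (the gradient is written analytically here; the paper's point is that an auto-differentiation framework supplies df/dx at the training points for free):

    import numpy as np

    # Toy model f(x) = x0**2 + x0*x1, so df/dx0 = 2*x0 + x1
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))
    grads = 2 * X[:, 0] + X[:, 1]     # local effects, evaluated at real samples only

    n_bins = 20
    edges = np.linspace(X[:, 0].min(), X[:, 0].max(), n_bins + 1)
    idx = np.clip(np.digitize(X[:, 0], edges) - 1, 0, n_bins - 1)
    bin_mean = np.array([grads[idx == b].mean() if (idx == b).any() else 0.0
                         for b in range(n_bins)])
    dale = np.concatenate([[0.0], np.cumsum(bin_mean * np.diff(edges))])
    # No artificial evaluation points are created (so no OOD queries to the model),
    # and the same gradients can be re-binned at any resolution in one pass.
    print(dale.round(2))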

Available for download here.

Moving from “black box” to “glass box” Artificial Intelligence in Manufacturing with XMANAI

Artificial Intelligence (AI) is finding its way into a broad range of industries, including manufacturing. The decisions and predictions that can potentially be derived from AI-enabled systems are becoming much more profound and, in many cases, critical to success and profitability. However, despite the indisputable benefits that AI can bring to society and to any industrial activity, humans typically have little insight about AI itself, and even less knowledge of how AI systems make any decisions or predictions, due to the so-called “black-box” effect. This paper presents the XMANAI approach, which focuses on explainable AI models and processes to mitigate such an effect and reinforce trust. The aim is to transform the manufacturing value chain with ‘glass box’ models that are explainable to a ‘human in the loop’ and produce value-based explanations for data scientists, data engineers and business experts.

Available for download here.

gLIME: A NEW GRAPHICAL METHODOLOGY FOR INTERPRETABLE MODEL-AGNOSTIC EXPLANATIONS

This paper contributes to the development of a novel graphical explainability tool that not only indicates the significant features of the model, but also reveals the conditional relationships between features and the inference, capturing both the direct and indirect impact of features on the model’s decision. The proposed XAI methodology, termed gLIME, provides graphical model-agnostic explanations either at the global scale (for the entire dataset) or the local scale (for specific data points). It relies on a combination of local interpretable model-agnostic explanations (LIME) with the graphical least absolute shrinkage and selection operator (GLASSO), producing undirected Gaussian graphical models. Regularization is adopted to shrink small partial correlation coefficients to zero, providing sparser and more interpretable graphical explanations. Two well-known classification datasets (BIOPSY and OAI) were selected to confirm the superiority of gLIME over LIME in terms of both robustness and consistency/sensitivity over multiple permutations. Specifically, gLIME accomplished increased stability over the two datasets with respect to feature importance (76%-96% compared to 52%-77% using LIME). gLIME demonstrates a unique potential to extend the functionality of the current state of the art in XAI by providing informative graphical explanations that could unlock black boxes.
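
A rough sketch of the gLIME recipe (generic LIME-style neighbourhood sampling combined with scikit-learn’s GraphicalLasso; illustrative, not the authors’ implementation):

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    def glime_graph(predict_fn, x, scale=0.1, n_samples=2000, alpha=0.05):
        """Sparse precision matrix over [features..., model output]: nonzero
        off-diagonal entries are the edges of the undirected graphical explanation."""
        Z = x + scale * np.random.randn(n_samples, x.size)   # LIME-style local samples
        out = np.asarray(predict_fn(Z)).reshape(-1, 1)       # black-box responses
        D = np.hstack([Z, out])
        D = (D - D.mean(axis=0)) / D.std(axis=0)             # standardize before GLASSO
        return GraphicalLasso(alpha=alpha).fit(D).precision_

    # Toy check: only x0 and x1 should connect to the output node (last row/column)
    f = lambda Z: Z[:, 0] + 2 * Z[:, 1] + 0.1 * np.random.randn(len(Z))
    print(np.round(glime_graph(f, x=np.zeros(4))[-1], 2))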

Available for download here.

PUBLIC DELIVERABLES:

D1.1: State of the Art Review in XMANAI Research Domains

D1.1 provides an overview of the Explainable Artificial Intelligence (XAI) domain and is structured as a collection of existing methods and algorithms that can be used as inspiration for developing the XMANAI methodology. It presents the state of the art on XAI and machine learning from a theoretical perspective, comparing different methods and tools. Furthermore, it includes an analysis of open-source solutions for Artificial Intelligence (AI) implementation and a look at XAI applications in industry. To provide a complete overview, the analysis is complemented by an exploration of human aspects in decision making and AI.

Available for download here.

D1.2: XMANAI Concept Detailing, Initial Requirements, Usage Scenarios and Draft MVP

This deliverable aims at bringing together the XMANAI concept by: (a) brainstorming on different user journeys in Explainable AI for business users, data scientists and data engineers, (b) eliciting the backlog of technical requirements and aligning them with the business requirements and the user journeys, (c) obtaining some early perspectives on the available manufacturing data (from the XMANAI demonstrators and open data sources), and (d) consolidating the Minimum Viable Product (MVP) that summarizes the expected features on which XMANAI shall focus (by the end of the project) for maximizing the expected added value to manufacturers while ensuring innovation from a scientific and technical perspective.

Available for download here.

D1.3: Updated Requirements and AI/Graph Analytics focused MVP

This deliverable is the updated version of D1.2, reflecting the perception of the XMANAI concept at M18. Its main focus is to provide an update on the technical requirements and an MVP version that focuses on the AI and Graph Analytics parts of the XMANAI platform. Its results act as input for technical work packages such as WP3 “Core Artificial Intelligence Bundles for Algorithm and Lifecycle Management” and WP4 “Novel Artificial Intelligence Algorithms for Industrial Data Insights Generation”.

Available for download here.

D1.4: Final XMANAI MVP

This deliverable aims at bringing together the XMANAI concept by: (a) describing the different user journeys in Explainable AI for business users, data scientists and data engineers, (b) eliciting the backlog of technical requirements and aligning them with the business requirements and the user journeys, (c) consolidating the Minimum Viable Product (MVP) that summarizes the expected features on which XMANAI shall focus (by the end of the project) for maximizing the expected added value to manufacturers while ensuring innovation from a scientific and technical perspective; and (d) defining the technical aspects regarding the concept of “X-By-Design” (Explainable by Design). It takes into account the feedback and experiences gained through the development and integration activities of the alpha release of the XMANAI Platform, as well as the initial implementation activities of the XMANAI Demonstrators, in order to finalise the specifications of the XMANAI solution.

Available for download here.

D2.1: Asset Management Bundles Methods and System Designs 

Deliverable 2.1 deals with the architectural design of the asset management layer of the overall XMANAI Platform. This layer has the central task of importing/extracting data from external data sources (i.e. legacy and operational manufacturing systems) and ensuring data explainability in order to make them available for running AI pipelines. To specify the asset management-related services, a detailed state-of-the-art analysis was performed, covering, on the one hand, all methods necessary for executing the asset management and sharing processes and, on the other hand, the current technologies for fulfilling the XMANAI requirements. Based on these results, a detailed architecture for the management of industrial assets and AI models was designed, accompanied by the selection of technologies and the elaboration of mock-ups to demonstrate the expected user interactions.

Available for download here.

D2.2: XMANAI Asset Management Bundles – First Release

The current deliverable provides detailed technical documentation for the first release of the components that constitute the Asset Management Bundles Methods and thus constitutes a report on the activities performed within all WP2 tasks. For each of the components, the documented information includes the implementation status of the first release, as well as the pending functionalities planned for the next release. Documentation of the API, the architecture and the technology stack employed for the development of the components is also reported, together with screenshots of the features provided through the user interfaces. This deliverable can serve as a guide for the technical users and researchers who participate in the implementation of the XMANAI platform, and for the business users who want to learn about the provided functionalities and how to utilize them.

Available for download here.

D2.3: XMANAI Asset Management Bundles – Second Release

The current deliverable provides detailed technical documentation for the final release of the components that constitute the Asset Management Bundles Methods and thus constitutes a report on the activities performed within all WP2 tasks. For each of the components, the provided information includes the description of the implemented functionalities, documentation of the API, the technology stack employed for the development of the components, and license information. Additionally, it provides screenshots of the features offered through the user interfaces. This deliverable can serve as a guide for the technical users and researchers who participate in the implementation of the XMANAI platform or want to re-use its components, and for the business users who want to learn about the provided functionalities and how to utilize them.

Available for download here.

D3.1: AI Bundles Methods and System Designs

This deliverable reports on the results produced through WP3 activities towards the delivery of the XMANAI Artificial Intelligence (AI) bundles designs and methods. It presents insights gained through the landscape analysis on research dimensions and technical advancements in the broader scope of AI pipelines and reflects upon the positioning of the XMANAI solution targeting explainable AI pipelines in the manufacturing domain. In this scope, the role of data models is explored and relevant standards are studied. The deliverable reports on the development of the XMANAI data model and presents its early concepts, relationships, and foreseen lifecycle management mechanisms. Finally, the detailed WP3 architecture design is discussed and each of its components is specified, including designed functionalities and offered methods, considered technologies for its implementation and indicative mockups.

Available for download here.

D3.2: XMANAI AI Bundles – First Release

The current deliverable provides detailed technical documentation for the first release of the components that constitute the XMANAI AI Bundles and thus constitutes a report on the activities performed within all WP3 tasks. For each of the components, the documented information includes the implementation status of the first release, as well as the pending functionalities planned for the next release. Documentation of the API, the architecture and the technology stack employed for the development of the components is also reported, together with screenshots of the features provided through the user interfaces. This deliverable can serve as a guide for the technical users and researchers who participate in the implementation of the XMANAI platform, and for the business users who want to learn about the provided functionalities and how to utilize them.

Available for download here.

D3.3: XMANAI AI Bundles – Second Release

The current deliverable provides detailed technical documentation for the final release of the components that constitute the XMANAI AI Bundles and thus constitutes a report on the activities performed within all WP3 tasks. For each of the components, the information includes a description of the implemented functionalities, documentation of its APIs and screenshots from its user interfaces. The architecture and the technology stack employed for the development of the components are reported, together with license information. Updates to the XMANAI data model are also reported and explained. This deliverable can serve as a guide for the technical users and researchers who participate in the implementation of the XMANAI platform, and for the business users who want to learn about the provided functionalities and how to utilize them.

Available for download here.

D4.1: Draft Catalogue of XMANAI AI and Graph Machine Learning Models

This deliverable reports on the construction of the Draft Catalogue of XMANAI AI and Graph ML models, as a collection of baseline algorithms and explanation methods that will be used to develop explainable solutions addressing four generic manufacturing application scenarios, represented by the XMANAI demonstrators: i) Production Optimization, ii) Product Demand Forecasting, iii) Process/Product Quality Optimization, and iv) Process Optimization and Semi-Autonomous Planning. A landscape analysis on the relevant application domains is conducted, followed by the technical description of the problems to be tackled and the identification of relevant data sources, leading to the selection of the Hybrid and Graph baseline models that will populate the initial release of the XMANAI Explainable AI platform.

Available for download here.

D5.1: System Architecture, Bundles Placement Plan and APIs Design 

This deliverable provides the design of the XMANAI reference architecture by: (a) designating the architecture blueprints across the tiers, services bundles, components and application perspectives, (b) designating the core workflows along the user journeys in Explainable AI for business users, data scientists and data engineers, for the interaction among the different XMANAI components, (c) defining the core functionalities, the mapping to the XMANAI technical requirements and MVP features, and the main interactions of the XMANAI components that shall be delivered in the XMANAI Centralized Cloud and On-Premise (Private Cloud) installations, and (d) obtaining some early perspectives on the features of the XMANAI manufacturing apps and their alignment with the business requirements of the XMANAI demonstrators.

Available for download here.

D5.2: XMANAI Platform – Alpha Version

The deliverable provides a detailed accompanying report that presents the first release of the XMANAI platform, namely the XMANAI Platform Alpha version, by: (a) documenting the updated architecture blueprints of the platform, highlighting the updates across the tiers, services bundles, components and application perspectives, (b) providing a detailed walkthrough of the delivered platform version from the user’s perspective, focusing on the usage of the core features and functionalities of the delivered version in order to perform the required operations that address the requirements of the XMANAI demonstrators, and (c) documenting the key aspects of the performed integration process activities, which are driven by a concrete integration plan formulated based on the adopted integration strategy and supported by a list of mature tools and techniques that allow the continuous integration and delivery of software artefacts as well as their evaluation and validation.

Available for download here.

D6.2: Project Evaluation Plan and First Round of Demonstrators Implementation Plan

D6.2 – “Project Verification and Validation Framework Definition” presents the XMANAI methodology and the evaluation framework to assess the impact of the XMANAI platform and solutions on the demonstrators at the end of the project. The document describes the questionnaire used for the assessment (stressing the role of the Explainability component in changing the current decision-making process), presents the steps to be performed to run the evaluation framework, and explains how to generate the final report, both to measure the impact on the production and decision-making processes and to verify whether the technical and business expectations are satisfied. Additionally, it provides a preliminary overview of the demonstrators’ Implementation Plan and describes the activities to be performed in the coming months by the demonstrators and technology providers to implement and adopt the XMANAI platform.

Available for download here.

D8.2: Project Website and Communication Channels Instantiation

D8.2 is the first result of XMANAI task T8.3 “Communication Activities and Publicity”, which is responsible for constructing the project’s website from the very early implementation stages. It also reports on the set-up of all the Web 2.0 and social channels that will be used for communication during the project. This document accompanies the launch of the XMANAI website and social media channels, which actually compose the deliverable.

Available for download here.

D8.3: 1st Dissemination and Communication and Plan for next period

D8.3 reports on all the WP8 activities performed during the 1st outreach phase (M1-M18) and provides an updated plan for all dissemination and communication activities scheduled for the 2nd outreach phase (M19-M30).

Available for download here.

D8.4: 2nd Dissemination, Communication, Standardisation Report and Plan for next period

D8.4 reports on the accumulated WP8 activities, with special emphasis on those performed during the 2nd period (M19-M30), and provides an updated plan for all dissemination, standardisation and communication activities scheduled for the 3rd reporting period (M31-M42).

Available for download here.

D9.2: Ethics and Data Management Plan

This document outlines the main elements of the data management policy, with regard to all research datasets that will be generated during the pilots’ execution and to the efficient management of publications, which will be agreed and followed by the Consortium in accordance with the H2020 guidelines regarding Open Access to Scientific Publications and Research Data.

Available for download here.