Knowledge Transfer and Replication Roadmap

This post provides recommendations and guidelines for porting, scaling up, and replicating the XMANAI concept at larger scales, and for transferring the knowledge, assets, and infrastructure developed in XMANAI to other manufacturing sectors and domains. It offers a structured framework to guide an organization’s thinking during the development of an XAI strategy, from a multidisciplinary perspective. The shift from a black-box approach to a transparent one based on explainability has created numerous opportunities for businesses and organizations. When deciding which opportunities to pursue and how to design an XAI solution, however, it is important to consider the requirements that come with such a project: XAI applications bring their own unique challenges, which require thoughtful analysis and a consistent implementation plan. Based on the XMANAI experience and the challenges faced during the project activities, the guidelines provided here are intended to support the replication of the three KERs in other contexts. The topics addressed therefore concern the conditions required for an effective XAI implementation, starting from the following questions: which must-have or should-have features are needed for successful adoption? Are there any preconditions or prerequisites? How can this asset be applied to other domains, and with what impact? What methodological or technical know-how could be exported outside the project, and to which sectors? What would be the best scalability path and approach? 

This analysis is thus performed for all three KERs, each with its own focus: the knowledge transfer aspect is most prominent for the XAI Take-up Methodology, while the XAI Models Catalogue and the XAI Platform focus more on the replication side.

Recommendations and Guidelines for XAI Platform

The XMANAI Platform has been developed and delivered as a first-of-its-kind Explainable-AI-centered platform addressing the needs of the different stakeholders in the manufacturing domain, and represents a core XMANAI KER. With the X-By-Design approach as a core differentiating aspect, the XMANAI Platform has been equipped with the necessary XAI-related functionalities for business users and technical users, as well as for more data-technology-knowledgeable users (i.e. data scientists and data engineers). The XMANAI Platform practically provides the underlying infrastructure where XAI models are trained and evaluated and XAI pipelines are configured and scheduled for execution in production, according to each stakeholder’s needs. To deliver differentiated views per type of end user, custom XMANAI Manufacturing Apps are created to serve the appropriate predictions and explanations in a user-friendly, intuitive manner and to bring tangible business benefits and impact.

The next sections present the main recommendations that emerge from the development and use of the XMANAI Platform by different users across the project activities.

1. Start from a Concrete Problem

For any stakeholder to leverage the benefits and added value that the XMANAI Platform brings, a concrete problem needs to be defined from the business perspective, and a collaboration mentality needs to be cultivated across the organization (i.e. among business users, technical users, data scientists, and data engineers). The scoping and problem definition includes, but is not limited to: the identification of the business need; the description of the as-is vs to-be situation; the challenges to solve and the restrictions to overcome; the data availability status and the profiling of the data needed; and the anticipated business impact.

In particular, the following “pentalogue” should be followed by the platform’s different users (business users, data scientists, data engineers) to capitalize on their experience and bring the best of the business, data, and technology worlds into the XMANAI Platform: 

  • Guideline 1. Prepare your data offline. The relevant data must be extracted from your back-end systems as batch files in CSV format or exposed through a well-documented API endpoint. If you opt for the batch file option, you need to pay particular attention to the delimiters, encoding, and overall file settings to comply with the instructions provided in XMANAI.
  • Guideline 2. Understand your data. To import your data to the XMANAI Platform, you should have an inherent understanding of the business meaning behind each field. Knowing the semantics and data types of your data (either directly or through collaboration) is a precondition for properly mapping them to the XMANAI data model and achieving “data explainability” within your organization. 
  • Guideline 3. Experiment with your data. The XMANAI Platform offers a plethora of functionalities to explore your data through a notebook environment, train the AI models of your choice, and configure XAI pipelines. To ensure that “model explainability” is appropriately considered, you need to pay attention to selecting the appropriate explainability techniques for the ML/DL model you have selected. 
  • Guideline 4. Decide internally how you intend to leverage the XAI results. Since the explanations for a specific prediction are often fit for purpose depending on the target user (e.g. his/her background, knowledge, context of use), achieving “results explainability” requires engagement of the end users and elicitation of their actual needs to provide them with custom user interfaces that will assist them in their everyday work. In principle, taking into consideration the acquired feedback and ideas, the data scientists and the business users/domain experts should anticipate many iterations to ensure the right hypotheses and interpretations are in place until the results (predictions and explanations) meet the expectations of the to-be situation. 
  • Guideline 5. Invest effort in maintaining your data, models, and pipelines. Ingesting your data once into the XMANAI Platform will undoubtedly provide certain new insights into your operations, but you need to keep your data up to date and of high quality to really reap the benefits in a production environment. In addition, you need to consistently monitor and troubleshoot your XAI pipelines, and constantly observe the performance and security aspects of your ML/DL models to trigger the appropriate processes under the hood (e.g. retraining, or effectively handling any data poisoning attempts).
Figure 1 XMANAI Platform: Quick actions & menu in accordance with the relevant XAI workflow
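Part of Guideline 1 can be automated before any upload attempt. The following Python sketch is an illustrative example, not part of the XMANAI tooling: it checks that a CSV batch export decodes with the expected encoding, has a consistent delimiter, contains the expected columns, and has no rows with a wrong field count. The sample data and column names are invented for the example.

```python
import csv
import io

def validate_batch_file(raw_bytes, expected_columns, encoding="utf-8"):
    """Decode a CSV batch export and check delimiter, header, and row
    consistency before uploading it for ingestion."""
    text = raw_bytes.decode(encoding)  # raises UnicodeDecodeError if the encoding is wrong
    dialect = csv.Sniffer().sniff(text, delimiters=";,|\t")  # infer the delimiter
    rows = list(csv.reader(io.StringIO(text), dialect))
    header, data = rows[0], rows[1:]
    missing = [c for c in expected_columns if c not in header]
    ragged = [i for i, r in enumerate(data, start=2) if len(r) != len(header)]
    return {
        "delimiter": dialect.delimiter,
        "missing_columns": missing,   # expected fields absent from the header
        "ragged_rows": ragged,        # line numbers with a wrong field count
        "n_rows": len(data),
    }

# Invented sample export: semicolon-delimited, UTF-8 encoded
sample = (b"machine_id;timestamp;temperature\n"
          b"M1;2024-01-01T00:00;71.2\n"
          b"M2;2024-01-01T00:05;69.8\n")
report = validate_batch_file(sample, ["machine_id", "temperature"])
```

Running such a check offline, before ingestion, surfaces delimiter and encoding mismatches early, which is exactly the class of problem Guideline 1 warns about.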

2. Leave the Deployment Decision up to each Stakeholder

For the XMANAI Platform to be adopted in domains with increased security concerns (like the ones raised in manufacturing during the requirements collection phase in the first months of the XMANAI project’s implementation), the stakeholders need to have the option for their data to always remain on-premise and all necessary processing to be performed “locally” (on private cloud or server infrastructures). To assuage such security concerns, the XMANAI Platform is modular by design and can operate in a mode where an On-Premise Environment is installed in the stakeholder’s premises or private cloud and efficiently interacts with the XMANAI Cloud Platform for coordination/orchestration of all services that cannot operate in a federated mode.  

Figure 2 XMANAI Architecture – Centralized & On-Premise Installations Interplay

It should be noted that, irrespective of the deployment option a stakeholder has adopted, it is crucial to appropriately onboard all users to the XMANAI Platform functionalities through (a) appropriate training sessions providing detailed how-to guidance, and (b) constant support channels (e.g. live communication channels in Slack) for getting direct support and for documenting and tracking any issues they may come across.

3. Plan the Integration Activities Early

The XMANAI Platform has successfully integrated over 20 services across its 8 layers/bundles in its centralized cloud deployment and/or its on-premise installation. To ensure that the integration activities proceed as smoothly as possible, a solid integration approach with well-defined CI/CD flows and tools, as well as a concrete integration plan including well-defined integration tests, needs to be in place from the very beginning (once the draft architecture is available), as depicted in the following figure.  

Figure 3 XMANAI Platform Integration Activities Planning & Monitoring

In addition, a component/service should be considered ready for integration only if certain preconditions are met: (a) unit tests have been written and run, ensuring code coverage above a pre-defined threshold; (b) the exposed APIs are appropriately documented in Swagger; and (c) the deployment needs (in terms of required resources) and any (migration) scripts are defined.  
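These readiness preconditions lend themselves to a simple automated gate. The sketch below is a hypothetical illustration, assuming invented field names and an example 80% coverage threshold (neither is an XMANAI specification):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServiceReport:
    """Facts a team reports about a component before integration (illustrative fields)."""
    name: str
    unit_test_coverage: float                # fraction of code covered, 0.0-1.0
    swagger_doc_url: Optional[str] = None    # link to the published API documentation
    deployment_resources: dict = field(default_factory=dict)  # e.g. {"cpu": "500m"}
    migration_scripts: list = field(default_factory=list)

def ready_for_integration(report, coverage_threshold=0.8):
    """Return the unmet preconditions; an empty list means 'ready'."""
    failures = []
    if report.unit_test_coverage < coverage_threshold:   # precondition (a)
        failures.append("coverage below threshold")
    if not report.swagger_doc_url:                       # precondition (b)
        failures.append("API documentation missing")
    if not report.deployment_resources:                  # precondition (c)
        failures.append("deployment resource needs undefined")
    return failures

ok = ServiceReport("service-a", 0.85, "https://example.org/docs", {"cpu": "500m"})
blocked = ServiceReport("service-b", 0.55)
```

A gate like this can run in the CI/CD flow itself, so a component that fails any precondition never enters the integration queue.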

4. Pave a Scalability Path

Although the XMANAI Platform has been used to address different manufacturing problems for the four XMANAI demonstrators, it is extensible by design: stakeholders can upload new data, experiment with new ML/DL models and explainability techniques, and configure the XAI pipelines they need for any new manufacturing problem at hand. Depending on the volume of the data and the underlying processing needs (e.g. frequency of executions, required memory/CPU per execution), the available resources may need to be scaled, yet this does not require any change on the platform side.  

In addition, thanks to the methodology the XMANAI project has followed since its beginning (documented in Deliverables D1.2, D1.3, and D1.4), the design of the XMANAI Platform has considered the needs of data scientists and data engineers in general, beyond the manufacturing domain. The XMANAI Platform can therefore be replicated in other industries/domains with certain adaptations, which mostly concern: (a) the inclusion of the appropriate industry-specific data model(s), and (b) the inclusion of additional XAI models in the XAI Models Catalogue, besides the custom apps/dashboards that depend on the end users’ needs.

Recommendations and Guidelines for XAI Take-up Methodology

The XAI Take-up Methodology, one of the project’s KERs, emerged from the challenges faced during the project activities: how to design XAI interfaces, and how to improve the effectiveness of XAI for end users who are not XAI experts, in order to support the adoption and appropriation of XAI technology.  

This methodology is designed as a whole, integrating several steps and different tools to be used at different moments. It is a coherent approach focused on obtaining the maximum value from implementing an XAI system, supporting the design of the optimal solution for the specific context. Nevertheless, the individual steps can also be taken separately for more limited results, as in the case of the visualisation prioritisation tool, which can be helpful in specific cases. However, to maximise the impact and support a sustainable XAI take-up process, it is recommended to apply and follow the overall process. 

From this experience, several recommendations and guidelines can be defined to support the replication and adoption of the XAI Take-up Methodology in other contexts, and these are presented in the following sections. There are no strict technical or organisational preconditions for adopting the methodology. It is not specifically designed for the manufacturing domain and could be used in any other domain and context where XAI systems need to be introduced, such as the energy, transportation, and logistics sectors.  

1. User research activities and definition of user needs

The involvement of end users is essential from the very beginning of the process. Engage end users in the design and evaluation process to ensure that the explainability approach and the XAI interfaces meet their needs and expectations; an iterative approach is essential to achieve a positive result. This approach applies to other domains and can be supported with standard user-research techniques and tools. Gather feedback through user testing, surveys, workshops, and interviews; iterate on the design based on user input to continually improve usability and effectiveness; and present the results using artifacts such as personas, user journey maps, and other user research outputs.

Figure 4 Example of User Journey built in a workshop with an XMANAI demonstrator

The organisation that wants to adopt and use this methodology should prepare the activities with a clear view of the main objectives and of the effort needed, including on the users’ side. It should be clear to users what they can gain from a successful integration of XAI, and how their feedback and knowledge are valuable for an effective design and implementation. In addition, to support the adoption and use of the XAI application, educational resources with information about AI concepts and terminology should be provided to users, and specific training sessions could be part of the users’ participation process.  

2. Explainability requirements and prioritisation

Involving users and gaining a clear understanding of their needs does not by itself make it clear how explainability should be implemented. As a result of the project activities, an online tool (the UXAI tool) was developed which synthesises the different potential types of explainability visualisation that can be used to address the main challenges faced in the XMANAI demonstrators. A generalisation was made so that others who face similar challenges can explore this knowledge. Adoption is thus simplified for organisations whose contexts and challenges are similar to those addressed in XMANAI, while other organisations will need to develop an approach from scratch; an analysis of the literature and a review of other experiences may be needed if there is no match with the UXAI tool results. In any case, it is important to consider the context in which the XAI system is to be adopted. Provide explanations that are relevant to the user’s domain knowledge, the task at hand, and the specific instance being analysed. Users may require different levels of detail depending on their expertise and context, so it is important to tailor explanation types to match the user’s expertise level and information needs.  

Figure 5 XMANAI UXAI tool homepage.

To support the mapping of the explainability requirements and the prioritisation of visualisation types, a specific tool was developed as part of the XAI Take-up Methodology (Grandi et al., 2024). It has been tested within the XMANAI demonstrators but could be applied to other domains and contexts as well. With such a tool, a more objective evaluation can support the decision-making process about which requirements to prioritise and which visualisation type is best suited to each specific requirement.  
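Although the actual prioritisation tool is described in Grandi et al. (2024), its core idea, scoring visualisation types against weighted requirements, can be sketched as follows. The requirement names, weights, and suitability ratings below are purely illustrative assumptions:

```python
def prioritise_visualisations(requirements, suitability):
    """Rank explanation visualisation types by summing, over all requirements,
    (requirement weight x suitability rating).
    `requirements` maps requirement -> priority weight;
    `suitability[viz][req]` is an expert rating (e.g. 0-5)."""
    scores = {}
    for viz, ratings in suitability.items():
        scores[viz] = sum(requirements.get(req, 0) * r for req, r in ratings.items())
    # highest total score first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented example: two requirements with priority weights,
# two candidate visualisation types with suitability ratings.
requirements = {"why_this_prediction": 3, "global_overview": 2}
suitability = {
    "feature_importance_bar": {"why_this_prediction": 4, "global_overview": 5},
    "counterfactual_table":   {"why_this_prediction": 5, "global_overview": 1},
}
ranking = prioritise_visualisations(requirements, suitability)
```

Making the weights and ratings explicit is what turns the prioritisation into the “more objective evaluation” referred to above: the same inputs always yield the same ranking, and disagreements can be traced back to individual scores.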

3. Interfaces prototyping

The final step of the XAI Take-up Methodology is the prototyping of the interfaces according to the defined requirements and XAI visualisations. End users are involved in all steps of the iterative process, but this is where its results are tested with them. Specific tutorials and contextual help should be considered: it is important to provide users with the maximum level of interpretability of the results, according to their level of knowledge and experience. In the end, the XAI system is a tool that must support users in their tasks, helping in the decision-making process. For this reason, interactive visualisations and tooltips can enhance the user’s understanding of complex XAI models.

An important aspect to consider when designing the interfaces and the human-XAI interaction is to provide mechanisms for users to give feedback on the explanations of results, allowing them to indicate whether they found an explanation helpful or whether they need additional information. This feedback can be used, in a feedback loop, to improve the quality and relevance of future explanations.  
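Such a feedback loop can start from a very simple mechanism. The sketch below is a hypothetical illustration (not the XMANAI implementation): it records helpful/unhelpful votes per explanation type and flags the types that users repeatedly mark as unhelpful, so designers know which explanations to revisit.

```python
from collections import defaultdict

class ExplanationFeedback:
    """Collect per-explanation-type user votes and flag types that
    users repeatedly find unhelpful (illustrative thresholds)."""

    def __init__(self):
        self.votes = defaultdict(list)  # explanation type -> [True/False = helpful?]

    def record(self, explanation_type, helpful):
        """Store one user's helpful/unhelpful vote for an explanation type."""
        self.votes[explanation_type].append(bool(helpful))

    def needs_revision(self, min_votes=5, helpful_ratio=0.5):
        """Return the explanation types with enough votes whose helpful
        ratio falls below the given threshold."""
        flagged = []
        for etype, v in self.votes.items():
            if len(v) >= min_votes and sum(v) / len(v) < helpful_ratio:
                flagged.append(etype)
        return flagged

fb = ExplanationFeedback()
for helpful in [False, False, False, True, False]:   # mostly unhelpful
    fb.record("saliency_map", helpful)
for helpful in [True, True, True, True, True]:       # consistently helpful
    fb.record("counterfactual", helpful)
```

The `min_votes` guard keeps a single negative vote from flagging an explanation type prematurely; the ratio threshold is a design choice to tune with the users themselves.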

Recommendations and Guidelines for XAI Models Catalogue

The XAI Models Catalogue represents a direct outcome of the XMANAI project’s journey, encapsulating the culmination of technical breakthroughs and their practical implementation across the four different XMANAI demonstrators. It encompasses a curated selection of AI algorithms paired with appropriate Explainability Tools, tailored to address the specific needs of the manufacturing sector. To ensure the efficacy of this Key Exploitable Result (KER), a dedicated working methodology was meticulously devised and applied throughout the project’s duration. As a result of this methodology, we have gathered a set of guidelines and best practices that offer valuable insights for potential replication in alternative contexts or adaptation to different industrial sectors beyond manufacturing. 

The subsequent sections summarize the primary recommendations derived from insights garnered through the XMANAI initiative. These guidelines serve as a roadmap for developing new XAI strategies, encompassing both the augmentation of the XMANAI Models Catalogue with additional AI models and Explainability tools for diverse manufacturing use cases, and the proficient application of these XAI algorithms in varied scenarios. 

1. Methodology for populating the Catalogue

Creating a comprehensive Catalogue of XAI models tailored to the specific requirements of manufacturing processes involves more than simply suggesting algorithms. It requires the adoption of an XAI-by-design approach, where the imperative of providing a rationale drives the selection of adequate combinations of accurate AI systems and suitable explainability tools. These tools, in turn, dictate the types of AI models that can effectively operate alongside them. This approach ensures that the chosen models not only generate accurate predictions but also offer transparent insights into their decision-making processes, aligning closely with the needs of industrial applications. 

To achieve this objective, a two-fold strategy was devised during the execution of XMANAI. In the initial phase, a meticulous assessment of the demonstrators’ requirements was conducted to identify the fundamental models and corresponding explainability tools needed. This involved a comprehensive analysis of the specific challenges and intricacies inherent in manufacturing processes, ensuring that the selected tools were capable of determining the rationale behind AI-driven decisions effectively. Subsequently, a curated selection of these base models and explainability tools was built, considering factors such as the availability of appropriate data sources and their suitability for addressing the identified use cases. The baseline models were then grouped and classified by category, AI algorithm family, and explainability type, so that an effective search of models could be performed. 

In the second phase, the focus shifted towards the training of the selected models based on the available data and their optimal alignment with the predefined use cases. This involved refining the models to ensure that they could effectively leverage the available data to generate meaningful insights and actionable recommendations for process optimization. By adopting this systematic approach, the Catalogue of Models was populated with a curated selection of AI models and explainability tools that not only met the immediate needs of the demonstrators but also laid the foundation for future advancements in XAI strategies within the manufacturing domain or even could be useful in a cross-domain scenario.
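The classification described above (category, AI algorithm family, explainability type) maps naturally onto a small searchable data structure. The following sketch is illustrative only: the entry names and classification values are invented, not taken from the actual Catalogue.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogueEntry:
    """One baseline model, classified along the three axes used
    when populating the Catalogue (illustrative values)."""
    name: str
    category: str        # e.g. "regression", "classification"
    family: str          # e.g. "tree", "tree_ensemble", "graph_ml"
    explainability: str  # e.g. "intrinsic", "post_hoc"

def search(catalogue, **criteria):
    """Return entries matching every given classification criterion."""
    return [e for e in catalogue
            if all(getattr(e, k) == v for k, v in criteria.items())]

# Invented example entries
catalogue = [
    CatalogueEntry("decision_tree_demand", "regression", "tree", "intrinsic"),
    CatalogueEntry("xgb_quality", "classification", "tree_ensemble", "post_hoc"),
    CatalogueEntry("gnn_process", "classification", "graph_ml", "post_hoc"),
]
hits = search(catalogue, explainability="post_hoc", category="classification")
```

Keeping the classification axes as first-class fields is what makes the “effective search of models” possible: a user can filter by any combination of axes without scanning the whole Catalogue manually.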

2. Technical structure of the XAI Models Catalogue

One of the primary insights gained from the project revolves around the effective construction of an XAI Models Catalogue and its deployment within a production environment, such as the XMANAI Platform. This involves designing and establishing a robust architecture to facilitate seamless interaction between AI algorithms and the explanatory layer overlaying them. 

The first lesson learned is that the Catalogue cannot function as an isolated module; instead, it needs to operate as a component with distributed storage. Consequently, various assets within the Catalogue are managed by distinct components, encompassing both sets of Hybrid AI and Graph ML models, as well as Explanation Tools. Within the XMANAI platform, integration of the Catalogue was achieved through a sophisticated structure. Two specific components were developed to manage these interactions: the XAI Model Engineering Engine (XMEE) and the XAI Model Explanations Engine (XMXE). The XMEE oversees the AI assets of the Catalogue, providing access to lists of trained models and their metadata while serving as a dedicated communication interface with other components of the XAI Model Lifecycle Management Services layer. Concurrently, the XMXE configures the explanatory information generated by Explainability Tools alongside predictions from Machine Learning and Deep Learning models. The configuration of explainability information, which complements predictions, is predominantly conducted during training or inference within the XMEE, thus fostering high interconnectivity between these two components. These interactions can be seen in the following figure.

Figure 7 XAI Model Catalogue Platform Interaction Scheme

As a result, our advice to replicate the Catalogue for broader and diverse scenarios beyond XMANAI would be to consider a similar technical structure approach, meticulously devising adaptations of these components to be implemented. 

3. Methodology for validating results from XAI models

By its very nature, no Machine Learning or Deep Learning trained model can achieve absolute efficiency. Consequently, after model training, the model’s performance is evaluated through a validation procedure (namely, the model validation) aimed at selecting the optimal model with the best performance. Numerous studies have addressed the best practices for conducting AI model validations across various scenarios. 

However, XMANAI has brought to light a validation-related challenge previously unaddressed: the necessity for a proper methodology to validate XAI models in manufacturing scenarios. This challenge is threefold, arising from the experiences gained through the four XMANAI demonstrators. It encompasses the importance of both accurate predictions from AI models and the valuable insights provided by Explainability Tools, all while considering the business perspective within the manufacturing domain. This new methodology emphasizes a human-centered approach, where end users ultimately determine whether an XAI model meets their needs or not. 

During XMANAI, the devised validation methodology for the Catalogue of XAI Models has integrated a blend of objective metrics, such as classical ML validation scores (Accuracy, Precision, Recall, RMSE, R2, etc.), to assess the alignment of model predictions with observed data. Additionally, subjective metrics derived from personal evaluations and questionnaires have been employed to evaluate the effectiveness and interpretability of results. These approaches have been complemented by a novel Business Value Validation Procedure designed to assess XAI techniques based on the performance of key performance indicators (KPIs) critical for Industry 4.0 systems. The following figure shows a diagram where the connections of all these assets are tangible in a manufacturing scenario. 

Figure 9 Graph visualization of a potential Business Value Validation Procedure in a manufacturing scenario
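The three validation strands, objective metrics, subjective questionnaire results, and the Business Value Validation Procedure, can be blended into a single score for comparing candidate XAI models. The weighting scheme below is a hypothetical illustration, not the procedure defined in XMANAI; it assumes each strand has already been normalised to the 0-1 range.

```python
def composite_validation_score(objective, subjective, business,
                               weights=(0.4, 0.3, 0.3)):
    """Blend the three validation strands into a single 0-1 score.
    All inputs are assumed already normalised to 0-1:
    - objective:  classical ML metrics (e.g. accuracy, or 1 - normalised RMSE)
    - subjective: mean questionnaire rating on effectiveness/interpretability
    - business:   degree of KPI improvement from the Business Value procedure
    The weights are an illustrative choice, to be set per use case."""
    w_obj, w_sub, w_bus = weights
    assert abs(w_obj + w_sub + w_bus - 1.0) < 1e-9, "weights must sum to 1"
    return w_obj * objective + w_sub * subjective + w_bus * business

# Invented example: strong predictions, moderate interpretability ratings,
# good KPI impact.
score = composite_validation_score(objective=0.9, subjective=0.7, business=0.8)
```

Because end users ultimately decide whether an XAI model meets their needs, the subjective and business weights would typically be negotiated with them rather than fixed by the data science team.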

Consequently, the methodologies developed and explored within XMANAI hold potential validity for the extrapolation of the Catalogue of XAI Models to other verticals beyond manufacturing.