The XMANAI reference architecture aims to provide the basis for the detailed specification, development and integration activities of the overall XMANAI platform, across its Data and AI-related Services Bundles, its XAI Algorithms and Models Catalogue, and the different Manufacturing Apps.
In the new digital world, where a tremendous amount of information is generated by an ever-increasing number of data sources, the need for security and privacy techniques, methods and solutions has become pressing.
Name: Dr Serafeim Moustakidis
Job title: Co-founder / CTO
Bio: Dr Serafeim Moustakidis has broad expertise in computational intelligence, machine learning and data processing, with more than 12 years of research experience across various fields.
In XMANAI, the exploration of the Explainable AI landscape, the analysis of the business requirements from its 4 demonstrators and the elicitation of the technical requirements have culminated in the definition of the Minimum Viable Product (MVP).
The implementation of Artificial Intelligence (AI) in the manufacturing domain enables higher production efficiency and outstanding performance.
The management of assets in XMANAI should meet a number of critical requirements. One of them is the explainability of data, since Explainable AI is the main objective of the project.
Name: Dr. David Monzo
Job Title: Director of AI
Organization: Tyris AI
Bio: Dr. David Monzo is the technical director and principal investigator at Tyris AI.
XMANAI partner spotlight – Politecnico di Milano Q: What is your organisation’s role in XMANAI? A: POLIMI is leading WP1 (Scientific Foundations), especially addressing Human Factors in the interaction with AI-based Autonomous Systems. The Collaborative Intelligence paradigm from Harvard Business Review will be applied and modelled in the industrial pilots, so as to be validated as […]
Artificial Intelligence has a crucial role in the digital transformation roadmap of traditional manufacturing companies: while on one side it can bring step-change improvements in several areas, on the other it is probably the most difficult technology to implement in a sustainable way, owing to the lack of knowledge and to the natural resistance to adoption that this type of technology generates in the people involved.
Fraunhofer FOKUS is the leader of the work package “Asset Management Bundles Methods and System Designs”. In this work package, management and sharing methods for assets are defined and prototypically implemented. These assets include industrial data as well as AI models and analyses based on such data.
The XMANAI project is working to provide the tools to navigate the “transparency paradox” of Artificial Intelligence (AI), designing, developing and deploying a novel Explainable AI Platform powered by explainable AI models that inspire trust, augment human cognition and solve concrete manufacturing problems with value-based explanations.
XMANAI is one of the few European projects focusing on eXplainable Artificial Intelligence (XAI) methodology. However, XMANAI does not use XAI alone to analyze data more deeply and efficiently; it also introduces many other novel methodologies to provide better data insights. Have you ever heard of Graph Neural Networks?
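To give a flavour of the idea, here is a minimal sketch of one Graph Neural Network message-passing layer in plain NumPy. The toy graph, feature sizes, weights and the `gcn_layer` function are illustrative assumptions for this article, not part of the XMANAI platform itself.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN-style graph convolution: average neighbour features, then transform."""
    adj_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg_inv = 1.0 / adj_hat.sum(axis=1, keepdims=True)
    aggregated = deg_inv * (adj_hat @ features)   # mean over each node's neighbourhood
    return np.maximum(aggregated @ weights, 0.0)  # linear transform + ReLU

# Toy graph: 4 nodes in a chain 0-1-2-3, 3 input features, 2 output features
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))
weights = rng.normal(size=(3, 2))

out = gcn_layer(adj, features, weights)
print(out.shape)  # one updated embedding per node: (4, 2)
```

Stacking several such layers lets information flow between nodes that are further apart in the graph, which is what makes these models attractive for relational manufacturing data.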
The field of explainable AI is thriving with interesting solutions, showing the potential to address almost any task in any given setting. This outburst of methods and models comes in response to interpretability being identified as one of the key factors for AI solutions to be trusted and widely deployed.
TXT is the coordinator of the project and the exploitation leader, bringing to XMANAI its Industry 4.0 competence from its Industrial & Automotive business unit and the technical competence of an end-to-end large-enterprise provider of consultancy, software services and solutions, supporting the digital transformation of customers’ products and core processes.
The key goal of the XMANAI project is Explainable Artificial Intelligence, a type of AI that aims to address how the black-box decisions of AI systems are made, inspecting and attempting to understand the steps and models involved in decision making in order to increase human trust.