XMANAI is one of the few European projects focusing on eXplainable Artificial Intelligence (XAI) methodology. However, XMANAI does not rely on XAI alone to analyse data more deeply and efficiently; it also introduces several other novel methodologies to provide better data insights. Have you ever heard of Graph Neural Networks?
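As a quick illustration of the idea behind Graph Neural Networks (this toy graph and update rule are illustrative assumptions, not XMANAI code): each node repeatedly updates its features by aggregating the features of its neighbours, a mechanism known as message passing. A minimal sketch in plain Python:

```python
# Minimal sketch of one message-passing step in a Graph Neural Network.
# The graph, features and weight below are illustrative assumptions.

# Toy undirected graph: node -> list of neighbour nodes
graph = {0: [1, 2], 1: [0], 2: [0]}

# One scalar feature per node
features = {0: 1.0, 1: 2.0, 2: 3.0}

weight = 0.5  # stands in for a learned parameter


def message_passing_step(graph, features, weight):
    """Each node's new feature = weight * (own feature + mean of neighbours')."""
    updated = {}
    for node, neighbours in graph.items():
        neigh_mean = sum(features[n] for n in neighbours) / len(neighbours)
        updated[node] = weight * (features[node] + neigh_mean)
    return updated


new_features = message_passing_step(graph, features, weight)
```

Stacking several such steps, each with learned weights, lets information propagate across the graph, which is what makes GNNs effective on inherently relational data such as factory processes and supply chains.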
The field of explainable AI is thriving with interesting solutions, showing the potential to address almost any task in any given setting. This surge of methods and models comes in response to interpretability being identified as one of the key factors for AI solutions to be trusted and widely deployed.
TXT is the coordinator and exploitation leader of the project. It brings to XMANAI the Industry 4.0 expertise of its Industrial & Automotive business unit, together with the technical competence of an end-to-end large-enterprise provider of consultancy, software services and solutions that supports the digital transformation of customers’ products and core processes.
The key goal of the XMANAI project is Explainable Artificial Intelligence, a type of AI that aims to address how the black-box decisions of AI systems are made, inspecting and attempting to understand the steps and models involved in decision making in order to increase human trust.
Today there is high pressure on this industry to be as competitive as possible: automating processes, optimizing cycle times, reducing unwanted downtime and increasing quality, among other goals. To achieve this process improvement, many companies are evolving towards what is called Industry 4.0, or Smart Factories.
Are we expecting a future where decisions will be made by machines and humans won’t have a say anymore? Do we think that in the coming years Artificial Intelligence will be capable of making decisions more accurately than humans?
Despite the indisputable benefits that AI can bring to society and to any industrial activity, humans typically have little insight into AI itself, and even less into how AI systems reach their decisions or predictions, due to the so-called “black-box effect”. Can you explain why and how your system generated a specific decision?
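To make that question concrete: for an interpretable model, a single prediction can be decomposed into the contribution of each input, which directly answers “why did the system produce this score?”. A minimal sketch, assuming a hypothetical additive (linear) scoring model with made-up feature names and weights, not an XMANAI component:

```python
# Minimal sketch of explaining one decision of a simple additive model.
# The feature names, weights and inputs are illustrative assumptions.

weights = {"temperature": 0.8, "vibration": 1.5, "cycle_time": -0.4}
bias = 0.2


def predict_with_explanation(weights, bias, inputs):
    """Return the score plus a per-feature breakdown of how it was reached."""
    contributions = {name: weights[name] * inputs[name] for name in weights}
    score = bias + sum(contributions.values())
    return score, contributions


inputs = {"temperature": 2.0, "vibration": 1.0, "cycle_time": 3.0}
score, why = predict_with_explanation(weights, bias, inputs)
# 'why' maps each feature to its additive contribution to the final score
```

A deep neural network offers no such built-in breakdown, which is exactly the black-box effect XAI methods try to overcome, for example by attributing a complex model’s output back to its inputs in a comparable way.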