XMANAI partner spotlight – Politecnico di Milano
Q: What is your organisation’s role in XMANAI? A: POLIMI leads WP1 (Scientific Foundations), in particular addressing human factors in the interaction with AI-based autonomous systems. The Collaborative Intelligence paradigm from the Harvard Business Review will be applied and modelled in the industrial pilots, so that it can be validated as […]
Education role in AI technology implementation in industry
Artificial Intelligence plays a crucial role in the digital transformation roadmap of traditional manufacturing companies. On one side, it can bring step-change improvements in several areas; on the other, it is probably the most difficult technology to implement in a sustainable way, owing to the lack of knowledge and the natural resistance to adoption that this type of technology generates in the people involved.
XMANAI partner spotlight – Fraunhofer FOKUS
Fraunhofer FOKUS leads the work package “Asset Management Bundles Methods and System Designs”. In this work package, management and sharing methods are defined and prototypically implemented for the assets, namely industrial data as well as AI models and analyses based on these data.
AI Requirements for Manufacturing in the XMANAI Project
The XMANAI project is working to provide the tools to navigate Artificial Intelligence (AI)’s “transparency paradox” by designing, developing and deploying a novel Explainable AI Platform, powered by explainable AI models that inspire trust, augment human cognition and solve concrete manufacturing problems with value-based explanations.
Have you ever heard about Graph Neural Networks?
XMANAI is one of the few European projects focusing on eXplainable Artificial Intelligence (XAI) methodology. However, XMANAI does not rely on XAI alone to analyse data more deeply and efficiently; it also introduces many other novel methodologies to provide better data insights. Have you ever heard about Graph Neural Networks?
A brief overview of XAI Landscape
The field of explainable AI is thriving with interesting solutions, showing the potential to address almost any task in any given setting. This surge of methods and models comes in response to interpretability being identified as one of the key factors for AI solutions to be trusted and widely deployed.
XMANAI partner spotlight – TXT e-solutions
TXT is the coordinator of the project and its exploitation leader. It brings to XMANAI the Industry 4.0 competence of its Industrial & Automotive business unit, together with the technical competence of an end-to-end large-enterprise provider of consultancy, software services and solutions, supporting the digital transformation of customers’ products and core processes.
Outlook on the XMANAI industrial demonstration cases
The key goal of the XMANAI project is Explainable Artificial Intelligence: a type of AI that aims to reveal how the black-box decisions of AI systems are made, inspecting and attempting to understand the steps and models involved in decision making in order to increase human trust.
Why should we make use of AI in the manufacturing industry?
Today there is high pressure on this industry to be as competitive as possible: automating processes, optimising cycle times, reducing unwanted downtime and increasing quality, among other goals. To achieve this process improvement, many companies are evolving towards what is called Industry 4.0, or Smart Factories.
Human Aspects in Decision making and AI
Are we expecting a future where decisions will be made by machines and humans won’t have a say anymore? Do we think that in the coming years Artificial Intelligence will be capable of making decisions more accurately than humans?
Moving from “black box” to “glass box” Artificial Intelligence in Manufacturing, with XMANAI
Despite the indisputable benefits that AI can bring to society and to any industrial activity, humans typically have little insight into AI itself, and even less into how AI systems make their decisions or predictions, due to the so-called “black-box” effect. Can you explain why and how your system generated a specific decision?