Human Aspects in Decision-Making and AI

Are we heading toward a future where decisions are made by machines and humans no longer have a say? Do we believe that, in the coming years, Artificial Intelligence will be capable of making decisions more accurately than humans?

Generally speaking, any decision process (from the simplest to the most complex) follows four steps:

  1. Identification of the problem
  2. Collection of the available information
  3. Identification of alternatives
  4. Choice of the most suitable alternative
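As a rough illustration, the four steps above can be sketched as a generic loop, here in Python (the problem, scoring function, and alternatives are purely hypothetical placeholders):

```python
# A minimal, hypothetical sketch of the four-step decision process.
def decide(problem, gather_information, generate_alternatives, score):
    # 1. Identification of the problem (given as input)
    # 2. Collection of the available information
    info = gather_information(problem)
    # 3. Identification of alternatives
    alternatives = generate_alternatives(problem, info)
    # 4. Choice of the most suitable alternative
    return max(alternatives, key=lambda alt: score(alt, info))

# Toy usage: pick a supplier by minimizing cost (negated so max() picks it).
choice = decide(
    problem="select supplier",
    gather_information=lambda p: {"costs": {"A": 120, "B": 95, "C": 110}},
    generate_alternatives=lambda p, info: list(info["costs"]),
    score=lambda alt, info: -info["costs"][alt],
)
# choice == "B", the cheapest supplier
```

In practice, of course, each of these functions hides most of the real difficulty, which is the point the next paragraphs develop.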

Depending on the situation, however, each step is carried out differently and requires a different amount of effort. Moreover, since several internal and external factors may influence the final choice, it is not possible to define a single perfect way to run a decision-making process. Consider, for example, the decision maker's approach, which can be autocratic, consultative, or collaborative, running the process individually or drawing on the team's support to a greater or lesser degree; or consider external factors such as time pressure, or biases and personal opinions that may strongly affect the final choice.

Not to be underestimated is the phase in which information is collected and analyzed: a proper evaluation of the available information is essential to arrive at the most accurate solution.

So, if we focus on "structured decisions", which are typically based on the analysis of large amounts of data, it turns out that when larger sets of information are available, the decision-making process can become complex, in particular during the phase of collecting and filtering information to find the most relevant items.

Hence, with an increasing amount of data and parameters to be taken into account, the help of Artificial Intelligence, able to speed up the selection and to find hidden correlations, sometimes becomes essential. Indeed, AI helps decision-makers collect, and act effectively upon, new sets of information that humans could not otherwise observe; it can provide support through predictive analytics, creating new insights via probabilities and statistical inference based on data.
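As a minimal sketch of what "support with predictive analytics" can mean, assume a toy dataset of past outcomes; the code below estimates a probability from observed frequencies and flags a case for the human decision-maker (the data, the temperature bands, and the alert threshold are all invented for illustration):

```python
# Hypothetical historical records: (temperature_band, failed_within_a_week)
history = [
    ("high", True), ("high", True), ("high", False),
    ("normal", False), ("normal", False), ("normal", True),
    ("normal", False), ("high", True),
]

def failure_probability(band, records):
    """Estimate P(failure | band) by simple relative frequency."""
    outcomes = [failed for b, failed in records if b == band]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

p_high = failure_probability("high", history)      # 3 of 4 -> 0.75
p_normal = failure_probability("normal", history)  # 1 of 4 -> 0.25

# The system surfaces the insight; the human decides what to do with it.
alert = p_high > 0.5
```

Real predictive-analytics pipelines replace the frequency count with trained statistical models, but the division of labor is the same: the machine quantifies the evidence, the human makes the call.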

However, Artificial Intelligence is not synonymous with perfect decisions, since the automatic construction of knowledge representations is a challenging process, and machines find it difficult to interpret all types of knowledge. Indeed, it is no secret that humans tackle unpredictable situations better, since they are aware (even if unconsciously) of many details about the environment around them. On the other hand, humans are more prone to error, especially in the case of repetitive tasks.

Another issue that has recently been the subject of discussion is the trustworthiness and transparency of AI algorithms assisting humans in decision-making: given their black-box nature, it is not easy for a non-expert user to understand how the system processes and analyzes its input to produce a specific output. In this context, transparency, understood as "the availability of information about how a third party is acting", is closely associated with explainability, "the ability of a system to provide understandable explanations", and represents a crucial ethical standard for Trustworthy AI.
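To make the contrast concrete, here is a hedged Python sketch of one common explainability idea, feature attribution, applied to a simple linear scoring model (the weights and the credit-scoring scenario are invented; genuinely black-box models require dedicated post-hoc explanation tools):

```python
# Hypothetical linear credit-scoring model: score = sum(weight * feature).
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}

def score_with_explanation(applicant):
    """Return the score plus each feature's signed contribution to it."""
    contributions = {
        name: weights[name] * value for name, value in applicant.items()
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income": 5.0, "debt": 2.0, "years_employed": 4.0}
)
# 'why' tells the user which inputs pushed the score up or down,
# e.g. debt contributes negatively here.
```

For a linear model such attributions fall out for free; the research challenge is producing equally understandable explanations for models whose internals are not directly interpretable.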

According to the I5.0 paradigm, recently described also by the European Commission in a paper[1] published in January 2021, the future adoption of Artificial Intelligence will be conceived to serve a human-centric vision. This means that AI and machines will be designed to complement and augment human capabilities, not to replace them. The goal of a perfect human-machine match is to identify which human capabilities cannot be embodied by machines, in order to enhance and focus on them, and to delegate the tasks in which workers can be effectively replaced. Concerning the decision-making process, many tasks can be supported (and enhanced!) by AI models, while keeping in mind the value of human-centricity, whereby artificial intelligence must be accepted, understood, and trusted by humans.