Have you ever heard of Graph Neural Networks?
XMANAI is one of the few European projects focused on eXplainable Artificial Intelligence (XAI) methodology. However, XMANAI does not rely on XAI alone to analyze data more deeply and efficiently; it also introduces several other novel methodologies to provide better data insights. One of them builds on Graph Neural Networks.
A Graph Neural Network (GNN) may be a new term to many readers. Before describing GNNs, it is worth noting that graphs are everywhere: many problems can be described graphically. For example, in a social network such as Facebook, people are the nodes and their relationships are the edges. In a GNN, node features can be word vectors when working with knowledge bases, pixel values when working with images, and so on.
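To make this concrete, a graph like the Facebook example can be sketched as an adjacency list with a feature vector per node. The names and vector values below are purely illustrative, not from any real dataset:

```python
# A toy social graph: people are nodes, relationships are edges.
# (Names and feature values are invented for illustration.)
graph = {
    "alice": ["bob", "carol"],   # alice is connected to bob and carol
    "bob":   ["alice"],
    "carol": ["alice"],
}

# Each node carries an initial feature vector; in practice this could
# be a word embedding or pixel values, here just small toy numbers.
features = {
    "alice": [1.0, 0.0],
    "bob":   [0.0, 1.0],
    "carol": [0.5, 0.5],
}

# A node's neighbourhood is simply the nodes it is connected to.
print(graph["alice"])  # ['bob', 'carol']
```

This bare dictionary representation is enough to express both the structure (who connects to whom) and the per-node information a GNN starts from.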
The starting point of a GNN is to initialize a vector representation for each node of the problem at hand. Next, each node's neighbourhood (the nodes directly connected to it) must also be considered. In this way, we can update the initial vectors using information from the graph structure, obtaining new vectors that represent the problem better. The aim of a GNN is therefore not to keep the initial node representations, but to aggregate the representations of each node's neighbours in order to obtain a better representation than the original. The major question, then, is how we can improve this representation.
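One simple way to "consider the neighbourhood" is to average the neighbours' vectors. This is only a sketch of one possible aggregation choice (mean aggregation), with a hypothetical toy graph; other aggregators are equally valid:

```python
# Toy graph and features (invented for illustration).
graph = {
    "alice": ["bob", "carol"],
    "bob":   ["alice"],
    "carol": ["alice"],
}
features = {
    "alice": [1.0, 0.0],
    "bob":   [0.0, 1.0],
    "carol": [0.5, 0.5],
}

def aggregate_neighbours(node, graph, features):
    """Mean-aggregate the feature vectors of a node's neighbours."""
    neigh = [features[n] for n in graph[node]]
    dim = len(features[node])
    return [sum(v[i] for v in neigh) / len(neigh) for i in range(dim)]

# alice's neighbours are bob [0, 1] and carol [0.5, 0.5],
# so the mean is [0.25, 0.75].
print(aggregate_neighbours("alice", graph, features))  # [0.25, 0.75]
```

The aggregated neighbour vector is then combined with the node's own vector to form its updated representation, which is exactly the question the next paragraph answers.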
First things first! To compute the new representation, we keep some information from the old vector by multiplying it by a weight matrix. Since we do not know this matrix in advance, a Neural Network (NN) learns it: we train the matrix on the data we have, such as node labels, edge labels, or other predictions derived from the graph structure. Using these inputs, the NN learns weights that produce correct predictions given the new representations. In addition, we use the neighbourhood representation (the information of the neighbours), which must be aggregated; the NN decides how to transform it (these transformation functions come in many "flavours"). Applying this formula iteratively (at each time step), we obtain a better and better graph representation. These informed, refined representations are then suitable for downstream tasks such as link prediction, node classification, and entire-graph classification.
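The update step described above can be sketched as a single message-passing layer: the node's own vector goes through one weight matrix, the mean of its neighbours' vectors goes through another, and a nonlinearity is applied. This is a minimal NumPy illustration of the general idea, not the specific formulation used in XMANAI; the random weights stand in for matrices that would normally be learned by backpropagation:

```python
import numpy as np

def gnn_layer(H, A, W_self, W_neigh):
    """One message-passing step: mix each node's own vector (via W_self)
    with the mean of its neighbours' vectors (via W_neigh), then ReLU."""
    deg = A.sum(axis=1, keepdims=True)          # number of neighbours per node
    neigh_mean = (A @ H) / np.maximum(deg, 1)   # average neighbour vector
    return np.maximum(0.0, H @ W_self + neigh_mean @ W_neigh)

# Toy example: 3 nodes in a path graph (0 - 1 - 2), 2-d features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 2))                     # initial node vectors
# In training, W_self and W_neigh would be learned from node/edge
# labels; here they are random, purely for illustration.
W_self = rng.normal(size=(2, 2))
W_neigh = rng.normal(size=(2, 2))

H_new = gnn_layer(H, A, W_self, W_neigh)        # refined representations
```

Stacking this layer several times (applying the formula iteratively) lets information flow further across the graph, which is what yields the refined representations used for link prediction or node classification.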
To conclude, GNNs have been successfully applied in many domains. This is unsurprising given what we discussed at the beginning of this post: almost every problem or system can be represented as a graph! And once a problem is described as a graph, a GNN can be applied to solve it.
For more information, read this article.