In the pursuit of explainable artificial intelligence (XAI), interpretable prediction has recently become a crucial issue. Interpretability is especially important in biomedical informatics, because domain experts such as clinicians or drug-development researchers can trust the outcomes of machine learning models only when valid grounds for the predictions are provided.
In this study, we propose a novel Graph Feature Attention Network (GFAN) for enhanced interpretability. The GFAN model is applicable to node classification problems on graph data. At the same time, it can explain its predictions in the form of subgraphs and node features, based on a wrapper feature selection method. Experiments on benchmark datasets validated the interpretability of GFAN and demonstrated performance competitive with the graph attention network of previous work.
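GFAN's exact architecture is not specified in this abstract. Purely as an illustration of the general idea, a graph-attention layer augmented with a learnable feature-level attention vector, whose weights can then be read off as a feature-importance explanation, might be sketched as follows (all function names, shapes, and scoring choices here are hypothetical, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_attention_layer(X, A, W, a_feat, a_edge):
    """Toy message-passing layer with feature-level attention (illustrative only).

    X: (N, F) node features; A: (N, N) adjacency with self-loops (0/1)
    W: (F, H) projection; a_feat: (F,) feature scores; a_edge: (2H,) edge scores
    """
    # Feature attention: weight each input feature before projection;
    # the resulting weights indicate which features drive the prediction.
    alpha_f = softmax(a_feat)                          # (F,)
    H_ = (X * alpha_f) @ W                             # (N, H)

    # Edge attention (GAT-style): score each connected node pair.
    N = X.shape[0]
    scores = np.full((N, N), -np.inf)
    for i in range(N):
        for j in range(N):
            if A[i, j]:
                scores[i, j] = np.tanh(np.concatenate([H_[i], H_[j]]) @ a_edge)
    alpha_e = softmax(scores, axis=1)                  # neighbor weights per node

    # Aggregate neighbor embeddings; return feature weights as the explanation.
    return alpha_e @ H_, alpha_f
```

In such a sketch, `alpha_f` serves the explanatory role the abstract attributes to GFAN: after training, the largest entries point to the node features the model relied on, while `alpha_e` highlights the influential subgraph around each node.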
In this research, GFAN was applied to a drug interaction network to address the polypharmacy side-effect prediction problem. Polypharmacy side effects must be carefully considered in new drug development. However, accounting for all the complex drug–drug interactions that cause polypharmacy side effects is challenging. Recently, graph neural network (GNN) models have handled these complex interactions successfully and shown strong predictive performance. Nevertheless, GNN models have difficulty providing intelligible grounds for their predictions to biomedical and pharmaceutical domain experts. Experimental results showed that GFAN can identify target genes underlying predicted polypharmacy side effects, so that domain experts can be convinced of the prediction results.