This article discusses XElemNet, a framework developed to enhance the interpretability of deep learning models for materials science, specifically ElemNet, which predicts material properties from elemental compositions alone. While deep learning models like ElemNet achieve high accuracy, they often function as “black boxes” whose internal reasoning is opaque. XElemNet addresses this by applying explainable AI (XAI) techniques to make ElemNet more transparent. The study focuses on post-hoc analysis, including convex-hull and stability analysis of binary and ternary compounds, to explain ElemNet's predictions. The framework uses decision trees as surrogate models to reveal which input features drive the predictions, for example the strong influence of highly electronegative elements on predicted formation energies. Overall, XElemNet demonstrates how explainability can improve trust in, and understanding of, deep learning models used for materials discovery.
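To make the surrogate-model idea concrete, below is a minimal sketch in Python, assuming a scikit-learn setup: a shallow decision tree is fitted to the predictions of a black-box regressor, and its feature importances are then read off. The five-element subset, the random compositions, and the synthetic formation-energy target (weighted toward the electronegative O and F) are all illustrative assumptions, not the authors' XElemNet code or data.

```python
# Illustrative sketch of the surrogate-model idea (not the authors' code):
# fit an interpretable decision tree to a black-box model's predictions,
# then inspect the tree's feature importances.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy stand-in for ElemNet's input: fractional compositions over a few
# elements (a small illustrative subset, not the full composition vector).
elements = ["O", "F", "Fe", "Al", "Si"]
X = rng.dirichlet(np.ones(len(elements)), size=2000)

# Synthetic "formation energy" target in which the electronegative elements
# O and F dominate, mimicking the trend the paper's surrogate trees surface.
y = (-2.0 * X[:, 0] - 2.5 * X[:, 1] + 0.3 * X[:, 2]
     + 0.05 * rng.normal(size=len(X)))

# Black-box model, standing in for ElemNet's deep network.
black_box = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                         random_state=0).fit(X, y)

# Surrogate: a shallow, human-readable tree trained on the black box's
# own outputs rather than on the ground-truth labels.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Rank elements by how much the surrogate relies on them.
for name, imp in sorted(zip(elements, surrogate.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")
```

Running this should rank O and F highest, mirroring on toy data the kind of feature-importance signal the paper extracts from its surrogate trees.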
You can read the full article at the link below:
https://www.nature.com/articles/s41598-024-76535-2
Please also consider the Quantum Server Marketplace platform for outsourcing computational science R&D projects to external expert consultants through remote collaboration.
#materials #materialsscience #materialsengineering #computationalchemistry #modelling #chemistry #researchanddevelopment #research #MaterialsSquare #ComputationalChemistry #Tutorial #DFT #simulationsoftware #simulation