
Research Project: Towards Resilient Agricultural Systems to Enhance Water Availability, Quality, and Other Ecosystem Services under Changing Climate and Land Use


Title: A review on interpretable and explainable artificial intelligence in hydroclimatic applications

Author
item BASAGAOGLU, HAKAN - University Of Texas At San Antonio
item CHAKRABORTY, DEBADITYA - University Of Texas At San Antonio
item LAGO, CESAR DO - University Of Texas At San Antonio
item GUTIERREZ, LILIANNA - University Of Texas At San Antonio
item SAHINLI, ARIF - Ankara University, Turkey
item GIACOMONI, MARCIO - University Of Texas At San Antonio
item FURL, CHAD - University Of Texas At San Antonio
item MIRCHI, ALI - Oklahoma State University
item MORIASI, DANIEL - US Department Of Agriculture (USDA)
item SENGOR, SEVINC - Middle East Technical University

Submitted to: Water
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: 4/7/2022
Publication Date: 4/11/2022
Citation: Basagaoglu, H., Chakraborty, D., Lago, C., Gutierrez, L., Sahinli, A., Giacomoni, M., Furl, C., Mirchi, A., Moriasi, D., Sengor, S.S. 2022. A review on interpretable and explainable artificial intelligence in hydroclimatic applications. Water. 14(8):1230. https://doi.org/10.3390/w14081230.
DOI: https://doi.org/10.3390/w14081230

Interpretive Summary: Artificial intelligence (AI) models rely on computer algorithms that enable software to mimic human behavior by learning from past data. AI models whose results can be understood by humans are called Interpretable AI (IAI) and eXplainable AI (XAI) models. This paper presents the findings of a review of IAI and XAI applications in hydrology and climate studies. The findings indicate that IAI models can unveil the rationale behind predictions, while XAI models can discover new knowledge and justify AI-based results, both of which are critical for enhanced accountability of AI-driven predictions. The review also discusses the choice and performance of IAI versus physics-based modeling. It concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydrology and climate applications. Once developed and validated, the proposed XAI framework will contribute to the research goals of the USDA Conservation Effects Assessment Project and Long-Term Agroecosystem Research network initiatives.

Technical Abstract: This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputation and for numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper include Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can be transformed into XAI models when coupled with explanatory methods such as Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that IAI models can unveil the rationale behind predictions, while XAI models can discover new knowledge and justify AI-based results, both of which are critical for enhanced accountability of AI-driven predictions. The review also elaborates on the importance of domain knowledge and interventional IAI modeling, the potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, the unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
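
The following is a minimal, illustrative sketch, not taken from the paper, of the coupling described above: a tree-ensemble predictive model is paired with the Shapley additive explanations (SHAP) method so that each predictor's contribution to individual predictions can be examined. It assumes Python with scikit-learn and the shap package installed; the synthetic predictors, target, and model settings are placeholders rather than actual hydroclimatic data.

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical multidimensional predictors (e.g., rainfall, temperature,
# antecedent soil moisture) combined nonlinearly into a target variable.
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) * X[:, 2] + rng.normal(scale=0.1, size=500)

# Fit the (otherwise black-box) AI model.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Couple the fitted model with an explanatory method: TreeExplainer returns
# per-sample, per-feature Shapley contributions to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# A simple global ranking of predictor influence: mean absolute Shapley value.
print(np.abs(shap_values).mean(axis=0))

Local interpretable model-agnostic explanations (LIME) follow the same coupling pattern but fit a simple surrogate model around each individual prediction instead of computing Shapley values.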