Location: Genetics and Sustainable Agriculture Research
Project Number: 6064-21600-001-012-S
Project Type: Non-Assistance Cooperative Agreement
Start Date: Jul 1, 2024
End Date: Sep 30, 2025
Objective:
1. Develop deep learning models that predict surface soil moisture (SM) distribution from unmanned aerial vehicle (UAV) multi-sensor data fusion.
2. Develop a prototype digital product for near-real-time SM mapping that runs standalone at the edge onboard the UAV and transfers the generated SM maps to users for visualization.
Approach:
For Objective 1: We will develop a deep learning (DL) framework with convolutional and fully connected neural network layers for SM mapping that jointly uses multiple image sources together with other physical and microwave data and computes features relevant to SM. During model development, site- and time-independent cross-validation will be used to improve model generalization and evaluate performance.
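The site- and time-independent cross-validation above can be sketched as a leave-one-group-out split, where each fold holds out every sample from one site or one year. This is a minimal illustration; the sample records, field names, and values below are hypothetical placeholders, not the project's actual data.

```python
def leave_one_group_out(samples, group_key):
    """Yield (held_out_group, train, test) splits.

    Each test fold contains every sample from one group (e.g. one site
    or one year), so the model is always evaluated on data from a group
    it never saw during training -- the site-/time-independent setting.
    """
    groups = sorted({group_key(s) for s in samples})
    for g in groups:
        test = [s for s in samples if group_key(s) == g]
        train = [s for s in samples if group_key(s) != g]
        yield g, train, test

# Illustrative records: one per UAV flight, tagged with site and year.
flights = [
    {"site": "A", "year": 2020, "sm": 0.21},
    {"site": "A", "year": 2021, "sm": 0.18},
    {"site": "B", "year": 2020, "sm": 0.30},
    {"site": "B", "year": 2021, "sm": 0.25},
]

# Time-independent CV: hold out one year at a time.
for year, train, test in leave_one_group_out(flights, lambda s: s["year"]):
    print(year, len(train), len(test))
```

The same helper gives site-independent folds by passing `lambda s: s["site"]` as the grouping key.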
We will use already-collected data. However, data from UAV sensors and in-situ observations generally cannot be used directly and must be prepared for DL models. For this study, we will use all the data we regularly collected from the R. R. Foil Plant Science Research Center, Mississippi State University (MSU), during 2020-2023. The study field was organized in a split-plot arrangement and planted with corn and cotton as the main crops. The UAV-based dataset contains visual, multispectral, and hyperspectral camera images; LiDAR point cloud data; microwave RF remote sensing observations; flight parameters; and in-situ soil moisture measurements. We will define data quality-control mechanisms and generate data quality flags before model training.
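Quality flagging of this kind typically attaches a list of flags to each observation before training. The sketch below shows the general pattern only; the thresholds, flag names, and record fields are illustrative assumptions, not the project's actual QC rules.

```python
def quality_flags(record, sm_range=(0.0, 0.6)):
    """Return quality-control flags for one observation record.

    Hypothetical checks: a missing soil-moisture reading, a reading
    outside a plausible volumetric range, and missing imagery.
    """
    flags = []
    sm = record.get("soil_moisture")
    if sm is None:
        flags.append("MISSING_SM")
    elif not (sm_range[0] <= sm <= sm_range[1]):
        flags.append("SM_OUT_OF_RANGE")
    if record.get("image_path") is None:
        flags.append("MISSING_IMAGERY")
    return flags

# A clean record produces no flags; a bad reading is flagged.
print(quality_flags({"soil_moisture": 0.25, "image_path": "f1.tif"}))
print(quality_flags({"soil_moisture": 0.90, "image_path": "f2.tif"}))
```

Records with non-empty flag lists can then be excluded or down-weighted during model training.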
For Objective 2: This task will focus on developing an onboard DL system and user-friendly UAV-based SM visualization. A mini onboard computer will acquire the sensor data defined in Objective 1. After preprocessing, the multi-sensor inputs will be fed to the pre-trained DL model (Objective 1) to map SM across the field. The generated georeferenced SM maps will be transferred to the user's smartphone or computer for visualization. The resulting SM maps can be viewed in popular software such as Google Earth or GIS packages.
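One common way to make such a map readable by Google Earth and GIS software is to export it as GeoJSON. The sketch below converts a small SM grid to a GeoJSON FeatureCollection of cell-center points; the coordinates, grid values, and cell size are hypothetical, and a real product might instead emit a GeoTIFF raster or KML overlay.

```python
import json

def sm_grid_to_geojson(grid, lat0, lon0, cell_deg):
    """Convert a 2-D SM grid (list of rows of floats) to a GeoJSON
    FeatureCollection, one Point feature per cell center.

    lat0/lon0 are the upper-left cell center; cell_deg is the cell
    size in degrees (illustrative georeferencing, not the real one).
    """
    features = []
    for i, row in enumerate(grid):
        for j, sm in enumerate(row):
            features.append({
                "type": "Feature",
                "geometry": {
                    "type": "Point",
                    # GeoJSON orders coordinates as [longitude, latitude].
                    "coordinates": [lon0 + j * cell_deg, lat0 - i * cell_deg],
                },
                "properties": {"soil_moisture": sm},
            })
    return {"type": "FeatureCollection", "features": features}

# Hypothetical 2x2 SM grid near the MSU research farm.
doc = sm_grid_to_geojson([[0.21, 0.24], [0.19, 0.22]],
                         lat0=33.47, lon0=-88.78, cell_deg=1e-4)
geojson_text = json.dumps(doc)
```

The resulting `geojson_text` can be saved as a `.geojson` file and opened directly in QGIS or converted for Google Earth display.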
For all the above tasks, SCINet computing resources will be utilized for performing deep learning computations and data storage.