Location: Plant, Soil and Nutrition Research
Title: Synergizing Proximal Remote Sensing Modalities for Enhanced Prediction of Key Agronomic Crop Traits
Authors:
FARMER, ERIN - Cornell University
MICHAEL, PETER - Cornell University
YAN, RUYU - Cornell University
Lepak, Nicholas
ROMAY, M CINTA - Cornell University
Buckler, Edward - Ed
SNAVELY, NOAH - Cornell University
SUN, YING - Cornell University
GAGE, JOSEPH - North Carolina State University
DAVIS, ABE - Cornell University
GORE, MICHAEL - Cornell University
Submitted to: North American Plant Phenotyping Network Meeting
Publication Type: Other
Publication Acceptance Date: 10/31/2023
Publication Date: 10/31/2023
Citation: Farmer, E., Michael, P., Yan, R., Lepak, N.K., Romay, M., Buckler IV, E.S., Snavely, N., Sun, Y., Gage, J.L., Davis, A., Gore, M.A. 2023. Synergizing Proximal Remote Sensing Modalities for Enhanced Prediction of Key Agronomic Crop Traits. North American Plant Phenotyping Network Meeting. https://doi.org/10.22541/essoar.169871639.93497349/v1.
DOI: https://doi.org/10.22541/essoar.169871639.93497349/v1
Interpretive Summary: Recent advancements in remote sensing technology have significantly enhanced our ability to collect detailed spatial and temporal agricultural data, improving digital farming practices. Our research focuses on combining data from multispectral images (MSIs) collected by drones with lidar scans from ground vehicles to better predict important crop traits, such as yield. Using deep learning techniques, specifically autoencoders, we extract hidden plant characteristics (latent phenotypes) from these data sources. This study, conducted from 2018 to 2022 in Aurora, NY, with maize hybrids as part of the Genomes to Fields project, demonstrates that integrating these advanced sensing technologies can improve the accuracy of crop trait predictions, ultimately boosting agricultural productivity and sustainability.
Technical Abstract: Recent progress in proximal remote sensing has elevated both the spatial and temporal resolution of data acquisition, expanding the accessibility of these technologies for digital agriculture applications. These advanced sensors enable the gathering of extensive and novel datasets, proving instrumental in accurately characterizing phenotypes and parameterizing models for crop growth. Despite the distinctive structural, spatial, and spectral information embedded in these data streams, they have predominantly been utilized in isolation.
Thus, this research aims to integrate these disparate data sources to improve estimations of agronomically important crop traits, such as yield. Deep learning methods, such as autoencoders, will be used to extract latent phenotypes, which will in turn be used to characterize manually measured traits. We focus on multispectral images (MSIs) collected by unoccupied aerial vehicles and lidar scans collected by unoccupied ground vehicles. MSIs capture canopy-level spectral information, including the red, green, blue, red edge, and near-infrared bands. Lidar scans are converted to point clouds to construct the three-dimensional sub-canopy architecture of maize plants. Data were collected on maize hybrids as part of the Genomes to Fields project, from 2018 to 2022, in Aurora, NY. Autoencoder training on MSIs shows that latent phenotypes are effective image representations, containing relevant and sufficient information to generate image reconstructions. The latent codes are also predictive of the image date and normalized difference vegetation index values. Latent phenotypes were likewise extracted from the lidar point clouds, and the prediction accuracies of models using these measurements separately and jointly will be compared.
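The autoencoder-based latent phenotype extraction described in the abstract can be illustrated with a toy linear autoencoder trained by gradient descent on reconstruction error. This is a simplified sketch, not the authors' model: the data are synthetic random arrays standing in for flattened multispectral image patches, and the dimensions and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for flattened image patches (n samples x d features).
X = rng.normal(size=(200, 32))

d, k = X.shape[1], 4  # input dimension and latent-code size (illustrative)
W_enc = rng.normal(scale=0.1, size=(d, k))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, d))  # decoder weights

def mse(A, B):
    """Mean squared reconstruction error."""
    return float(np.mean((A - B) ** 2))

mse_before = mse(X @ W_enc @ W_dec, X)

lr = 0.01
for _ in range(500):
    Z = X @ W_enc          # latent codes ("latent phenotypes")
    X_hat = Z @ W_dec      # reconstruction from the codes
    err = X_hat - X
    # Gradient descent on the reconstruction error.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse_after = mse(X @ W_enc @ W_dec, X)

# Extracted latent codes, usable as features for downstream trait prediction.
Z = X @ W_enc  # shape (200, 4)
```

Once trained, the low-dimensional codes Z are the features that would be fed to downstream models predicting traits such as yield.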
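The abstract notes that the latent codes are predictive of normalized difference vegetation index (NDVI) values. For reference, NDVI is computed per pixel from the red and near-infrared bands as (NIR - red) / (NIR + red); a minimal NumPy sketch, with hypothetical reflectance arrays as inputs:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - red) / (NIR + red).

    A small eps guards against division by zero on dark pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Example: a vegetated pixel (high NIR) and a bare-soil pixel.
values = ndvi(np.array([0.6, 0.3]), np.array([0.1, 0.25]))
# vegetated pixel NDVI ~ 0.71; bare soil NDVI ~ 0.09
```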