Location: Foreign Arthropod Borne Animal Disease Research
Project Number: 3022-32000-062-011-S
Project Type: Non-Assistance Cooperative Agreement
Start Date: Sep 1, 2023
End Date: Aug 31, 2026
Objective:
The cooperator will develop a system for gathering infrared spectroscopy data paired with visual light images of adult mosquito and tick specimens. Initial validation of the system will be performed by verifying the differentiability of separate populations in a small dataset of laboratory-reared specimens. The data collection system will then be used by partner mosquito and tick surveillance organizations on field-collected specimens to create a high-quality dataset of ticks and mosquitoes, recording critical variables such as specimen age, pathogen status, species, and parity status. The dataset will support development of deep learning algorithms for automated identification of these variables. Algorithm development for visual light images will be initialized from the cooperator's existing convolutional neural network algorithm for image-based species identification, currently deployed in the cooperator's smart digital microscope device for vector identification. The cooperator will build on these methods to fuse the visual and infrared signals for high-fidelity identification of species and parity, with future work devoted to continuous age grading and pathogen detection.
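As a rough illustration of the kind of paired record such a dataset could contain, the sketch below pairs multi-angle visual images and a MIR spectrum with the label variables named above. All field names and types are hypothetical assumptions for illustration, not the cooperator's actual data schema.

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class PairedSpecimenRecord:
    """One specimen with paired imaging and MIRS data.

    Field names and types are illustrative only; the cooperator's actual
    schema is not specified in this agreement.
    """
    specimen_id: str
    species: str                             # taxonomic label
    image_paths: List[str]                   # multi-angle visual light images
    mir_spectrum: np.ndarray                 # mid-infrared absorbance values
    wavenumbers_cm1: np.ndarray              # wavenumber axis for the spectrum
    parity_status: Optional[str] = None      # e.g. "nulliparous" / "parous"
    pathogen_status: Optional[bool] = None   # infection result, if tested
    age_days: Optional[float] = None         # specimen age, if known
```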
Approach:
The cooperator will develop the integrated data acquisition system pairing visual imaging with mid-infrared spectra (MIRS). This system will be validated in consultation with domain experts before distribution for preliminary use by two select partners identified by USDA. The resulting data will be used to train algorithms for the identification of disease and parity status.
Aim 1: Develop and validate a data collection system pairing MIR spectra with multi-angle, high-resolution images.
Aim 2: Collect paired MIRS and image data on 2,000 specimens in collaboration with select partners already recording the parity or infection status of their specimens.
Aim 3: Develop preliminary discriminative deep learning methods to process and fuse paired inputs of visual images and MIR spectra.
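A minimal sketch of the kind of discriminative fusion model Aim 3 describes, assuming a small CNN branch over a visual image and a 1D convolutional branch over the MIR spectrum whose embeddings are concatenated before separate species and parity heads. The layer sizes, class counts, and the late-fusion strategy itself are illustrative assumptions, not the cooperator's deployed architecture.

```python
import torch
import torch.nn as nn

class VisualMIRSFusionNet(nn.Module):
    """Illustrative late-fusion classifier for paired image + MIR spectrum inputs.

    Layer sizes and concatenation-based fusion are assumptions for
    demonstration only; they do not describe the cooperator's model.
    """
    def __init__(self, n_species: int = 10):
        super().__init__()
        # Visual branch: small CNN producing a fixed-length embedding.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (B, 32)
        )
        # Spectral branch: 1D convolutions over the MIR absorbance vector.
        self.spectrum_branch = nn.Sequential(
            nn.Conv1d(1, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),        # -> (B, 32)
        )
        # Concatenated embeddings feed two heads: species and parity status.
        self.species_head = nn.Linear(64, n_species)
        self.parity_head = nn.Linear(64, 2)

    def forward(self, image: torch.Tensor, spectrum: torch.Tensor):
        z = torch.cat([self.image_branch(image),
                       self.spectrum_branch(spectrum)], dim=1)
        return self.species_head(z), self.parity_head(z)

# Example forward pass on random stand-in data (batch of 4 specimens).
model = VisualMIRSFusionNet()
species_logits, parity_logits = model(
    torch.randn(4, 3, 224, 224),   # visual light images
    torch.randn(4, 1, 1024),       # MIR spectra
)
```

Extending the same fused embedding with additional heads (e.g. continuous age regression or pathogen detection) would follow the same pattern, consistent with the future work noted in the Objective.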