Location: Insect Behavior and Biocontrol Research
Title: DeCapsGAN: generative adversarial capsule network for image denoising
Author:
LYU, Q - Shaanxi Normal University
GUO, M - Shaanxi Normal University
PEI, Z - Shaanxi Normal University
Mankin, Richard
Submitted to: Journal of Electronic Imaging
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: 5/7/2021
Publication Date: 6/2/2021
Citation: Lyu, Q., Guo, M., Pei, Z., Mankin, R.W. 2021. DeCapsGAN: generative adversarial capsule network for image denoising. Journal of Electronic Imaging. 30(3). Article 033016. https://doi.org/10.1117/1.JEI.30.3.033016.
Interpretive Summary: Insect pests create sounds as they feed and move, which can be detected by acoustic methods. However, various background noise sources corrupt the electronic signals collected by acoustic and visual sensors. Random noise can be removed by applying Gaussian noise statistics, but impulse noise generated by sensor and transmission errors has a much different, unpredictable distribution. Scientists and students at Shaanxi Normal University, China, in collaboration with scientists at the USDA-ARS Center for Medical, Agricultural, and Veterinary Entomology, Gainesville, Florida, have developed improved methods for removing mixed random and impulse noise from electronic signals. The method was tested by artificially adding mixed noise to "clean" visual images and comparing the speed and accuracy of the new DeGAN "adversarial network" method with six other denoising methods. Applying the DeGAN method to the test images improved image clarity and greatly reduced the time required for denoising. The DeGAN method is now being applied to field-collected insect acoustic signals to discriminate insect sounds from background noise and improve detection of insect pests in field environments.
Technical Abstract: Restoration of images corrupted by mixed noise (e.g., additive white Gaussian noise and impulse noise) is very difficult because of the complexity of the mixed noise distribution. Various mixed noise removal models rely on preprocessing based on outlier detection. However, the performance of these models depends largely on how accurately the pixel locations of outliers are detected, and artifacts and loss of image detail are prone to occur when the mixed noise is strong. In this paper, a new denoising model based on a generative adversarial network (DeGAN) is proposed to remove mixed noise from images. The proposed model combines generator, discriminator, and feature extractor networks. Through the adversarial game between the generator and discriminator networks, combined with additional guidance from the feature extractor network, the generator learns a direct mapping from the noisy image domain to the noise-free image domain. In addition, we design a new joint loss function that incorporates information from image features and human visual perception into the mixed noise elimination task, further improving image quality and visual effect. Extensive experiments show that our model outperforms state-of-the-art mixed noise removal methods in three different types of mixed noise scenarios and that the joint loss function does improve denoising performance.
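Illustrative sketch (not from the publication): to make the setup in the abstract concrete, the short PyTorch example below shows how a clean image might be corrupted with the kind of mixed noise used in the tests (additive white Gaussian noise plus salt-and-pepper impulse noise) and how a joint generator loss could combine pixel fidelity, an adversarial term from a discriminator, and a feature term from a fixed feature extractor. The network definitions, noise parameters, and loss weights are assumptions for illustration only; they are not the DeGAN architecture or loss function reported in the paper.

# Minimal PyTorch sketch of mixed-noise synthesis and a joint GAN-style denoising loss.
# All layer sizes, loss weights, and noise levels are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def add_mixed_noise(clean, sigma=25 / 255.0, impulse_prob=0.1):
    """Corrupt a clean image tensor (B, C, H, W, values in [0, 1]) with additive
    white Gaussian noise plus salt-and-pepper impulse noise."""
    noisy = clean + sigma * torch.randn_like(clean)                # Gaussian component
    mask = torch.rand_like(clean)
    noisy = torch.where(mask < impulse_prob / 2, torch.zeros_like(clean), noisy)      # "pepper"
    noisy = torch.where(mask > 1 - impulse_prob / 2, torch.ones_like(clean), noisy)   # "salt"
    return noisy.clamp(0.0, 1.0)

class TinyGenerator(nn.Module):
    """Stand-in denoiser: maps a noisy image directly to a denoised estimate."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )
    def forward(self, x):
        return torch.sigmoid(self.net(x))

class TinyDiscriminator(nn.Module):
    """Stand-in discriminator: scores how noise-free an image looks (logit output)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )
    def forward(self, x):
        return self.net(x)

def generator_joint_loss(denoised, clean, disc_score, feat_denoised, feat_clean,
                         w_pix=1.0, w_adv=1e-3, w_feat=1e-2):
    """Joint loss in the spirit of the abstract: pixel fidelity + adversarial term
    + feature (perceptual) term from a fixed feature extractor. Weights are guesses."""
    pixel = F.mse_loss(denoised, clean)
    adversarial = F.binary_cross_entropy_with_logits(disc_score, torch.ones_like(disc_score))
    feature = F.mse_loss(feat_denoised, feat_clean)
    return w_pix * pixel + w_adv * adversarial + w_feat * feature

if __name__ == "__main__":
    gen, disc = TinyGenerator(), TinyDiscriminator()
    # Placeholder feature extractor; a pretrained network would normally be used here.
    feat = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
    clean = torch.rand(2, 3, 64, 64)
    noisy = add_mixed_noise(clean)
    denoised = gen(noisy)
    loss = generator_joint_loss(denoised, clean, disc(denoised), feat(denoised), feat(clean))
    loss.backward()
    print(f"joint generator loss: {loss.item():.4f}")

In an actual training loop, this generator loss would alternate with a discriminator update that distinguishes clean images from denoised outputs, which is the adversarial game the abstract refers to.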