2021 ESA Annual Meeting (August 2 - 6)

Deep learning for entomological discovery: Monarch butterfly classification and identification through image processing

On Demand
Kevin Y. Chen, Academy for Mathematics, Science, and Engineering
Background/Question/Methods

Identification of species, particularly within entomology, is a crucial step in gaining insight into population trends, especially when citizen scientists are integral to the effort. Monarch butterflies (Danaus plexippus) are famous for their North American migration, traveling from the United States and southern Canada to Mexico and back over multiple generations. In recent years, however, factors including climate change, habitat loss, and loss of food sources have caused the species' population to decline and its migration patterns to be disrupted. The search for a rapid and efficient identification method is complicated by other butterfly species, such as the viceroy, that have similar phenotypes. We therefore propose the development of a machine learning model that classifies butterflies in imagery, with the purpose of distinguishing monarch butterflies from “look-alikes.” Our methodology includes gathering a dataset of images sourced from Google Images containing six species of butterflies and subsequently training a convolutional neural network (CNN) on the imagery for classification. This novel dataset includes images of the monarch butterfly, viceroy butterfly, red admiral, painted lady, queen butterfly, and soldier butterfly, with a distribution of 200,192; 98,765; 53,654; 26,578; 12,534; and 14,213 images, respectively.
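The per-species image counts above imply a strongly imbalanced dataset, which is worth quantifying since it motivates the use of a weighted F1 score later. A minimal sketch, using only the counts reported in the abstract:

```python
# Class distribution of the dataset described above (counts taken from the abstract).
counts = {
    "monarch (Danaus plexippus)": 200192,
    "viceroy (Limenitis archippus)": 98765,
    "red admiral": 53654,
    "painted lady": 26578,
    "queen": 12534,
    "soldier": 14213,
}

total = sum(counts.values())  # 405,936 images in total
for species, n in counts.items():
    # Print each class's share of the dataset to expose the imbalance.
    print(f"{species}: {n} images ({100 * n / total:.1f}%)")
```

Roughly half of all images are monarchs, while the rarest classes (queen and soldier) each make up only about 3% of the data, so aggregate metrics should be weighted by class support.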

Results/Conclusions

We train a baseline CNN to evaluate the dataset and its usefulness for the monarch-versus-look-alike classification problem. The input is a labeled butterfly image with species and bounding-box annotations; the output is an integer from 0 to 5 representing the predicted species. The model architecture is ResNet-50, pretrained on ImageNet. The optimization criterion is the cross-entropy loss. We train on a randomly selected 80% of the dataset with a batch size of 32, using the Adam optimizer with a learning rate of 0.01, for 100 epochs on NVIDIA Tesla K80 GPUs. Testing on the remaining 20% of the data, our baseline model achieves a weighted F1 score of 0.824. This indicates reasonably accurate performance; however, when examining the per-class F1 scores, Danaus plexippus and Limenitis archippus have the lowest values, which is expected given that they arguably have the most visually similar phenotypes. Future work includes fine-tuning and further optimizing the models to improve performance.