Thu, Aug 05, 2021: On Demand
Background/Question/Methods
Identifying and characterizing plant species is important for plant diversity assessments. Traditional methods rely on manual measurements, which are time-consuming and therefore limited in spatial scale. To address this challenge, we used a U-Net deep learning architecture to automatically segment plant species in images taken on field plots, and we used these segmentations to estimate plant cover. We collected imagery with an iPhone and with hyperspectral imaging systems mounted on Unmanned Aerial Systems (UAS). The plots contained monocultures and mixtures of species from legume, annual grass, and forb functional groups. For the current study, we trained the model on images of monoculture plots; the ground-truth masks were created with a color-threshold method that separates canopy cover from bare ground.
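The abstract does not specify the color-threshold rule used to build the ground-truth masks. A minimal sketch of one common approach (an excess-green style threshold, where a pixel counts as canopy when its green channel dominates red and blue) might look like the following; the function name and the margin value are illustrative assumptions, not details from the study.

```python
import numpy as np

def vegetation_mask(rgb, g_margin=10):
    """Hypothetical color-threshold mask: a pixel is 'vegetation' when its
    green channel exceeds both red and blue by a fixed margin.
    The margin of 10 is an illustrative assumption."""
    rgb = rgb.astype(np.int16)  # widen dtype to avoid uint8 wraparound
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (g > r + g_margin) & (g > b + g_margin)

# Tiny synthetic example: one green (canopy-like) pixel, one brown (soil-like) pixel
img = np.array([[[40, 120, 35], [120, 90, 60]]], dtype=np.uint8)
mask = vegetation_mask(img)  # → [[True, False]]
```

In practice such thresholds are often tuned per lighting condition, which is one reason they work best on the monoculture plots used for training here.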
Results/Conclusions
As the complexity of the segmentation task increased, the model's accuracy decreased. The model most reliably distinguished soil from vegetation, even in images with high vegetation cover: using Intersection over Union (IoU) as the performance metric, it achieved 85% accuracy on segmenting vegetation from soil. Performance declined when the model was tasked with identifying individual species within complex imagery. The best-performing species class was canola, at 75% IoU; the worst was vetch, at 22%. This large margin in prediction accuracy shows that the model struggles to identify clear differences between species. Based on these preliminary results, our strategy to improve model performance is to generate heatmaps that reveal which parts of the images are misclassified, to introduce sparse examples of each species during training to help the model find disparities between species (such as subtle differences in leaf shape), and to introduce multispecies diversity into the training images.
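For readers unfamiliar with the metric, Intersection over Union for a binary segmentation mask is the area of overlap between prediction and ground truth divided by the area of their union. A minimal sketch (the function name is an assumption, not from the study):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Convention: two empty masks agree perfectly
    return intersection / union if union else 1.0

pred   = np.array([[1, 1, 0, 0]])
target = np.array([[1, 0, 1, 0]])
iou(pred, target)  # intersection 1, union 3 → 1/3
```

For multi-class segmentation, the same computation is typically run per class (e.g., canola vs. everything else), which is how per-species scores such as the 75% and 22% figures above are obtained.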