Dan Morris
I use AI to help conservation practitioners spend less time doing boring things. Also see my Google Scholar profile and my personal site.
Authored Publications
To crop or not to crop: comparing whole-image and cropped classification on a large dataset of camera trap images
Jorge Ahumada
Sara Beery
Stefan Istrate
Clint Kim
Tanya Birch
Tomer Gadot
IET Computer Vision (2024)
Abstract
Camera traps are frequently used for non-invasive monitoring of wildlife, but their widespread adoption has created a data processing bottleneck: a single camera trap survey can create millions of images, and the labor required to review those images strains the resources of conservation organizations. AI is a promising approach for accelerating image review (i.e., semi-automatically identifying the species that are present in each image), but AI tools for camera trap data are still imperfect; in particular, classifying small animals remains difficult, and accuracy falls off outside of the ecosystems in which a model was trained. It has been proposed that incorporating an object detector into a camera trap image analysis pipeline may help address these challenges, but the benefit of object detection for camera trap image analysis has not been systematically evaluated in the literature. In this work, we assess the hypothesis that classifying animals cropped from camera trap images using a species-agnostic detector will yield better accuracy than classifying whole images. We find that incorporating an object detection stage into an image classification pipeline yields a macro-average F1 improvement of around 25% on a very large, long-tailed dataset, and that this improvement is reproducible on a large public dataset and a smaller public benchmark dataset. We describe a classification architecture that performs well for both whole images and detector-cropped animals, and demonstrate that this architecture performs at a state-of-the-art level on a public benchmark dataset.
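The two-stage pipeline evaluated in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `detect`, `classify_crop`, and `crop` callables and the confidence threshold are hypothetical stand-ins for a species-agnostic detector, a crop classifier, and an image-cropping routine.

```python
from typing import Callable, List, Tuple

# (left, top, right, bottom) in pixels
Box = Tuple[int, int, int, int]

def classify_with_detector(
    image,
    detect: Callable[[object], List[Tuple[Box, float]]],
    classify_crop: Callable[[object], str],
    crop: Callable[[object, Box], object],
    confidence_threshold: float = 0.2,
) -> List[str]:
    """Run a species-agnostic detector, then classify each confident crop.

    Returns one species label per confident detection; an empty list means
    the image is treated as empty (no animal found above the threshold).
    """
    labels = []
    for box, confidence in detect(image):
        if confidence >= confidence_threshold:
            labels.append(classify_crop(crop(image, box)))
    return labels
```

The whole-image baseline the paper compares against would simply call the classifier on the uncropped image, skipping the detection stage.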
A versatile, semi-automated image analysis workflow for time-lapse camera trap image classification
Hanna Böhner
Olga Pokrovskaya
Desheng Liu
Natalia Sokolova
Olivier Gilg
Wenbo Zhou
Ivan Fufachev
Peter Ungar
Rolf Anker Ims
Alexsandr Sokolov
Dorothee Ehrich
Gerardo Celis
Ecological Informatics (2024)
Abstract
Camera traps are a powerful, practical, and non-invasive method used widely to monitor animal communities and evaluate management actions. However, camera trap arrays can generate thousands to millions of images that require significant time and effort to review. Computer vision has emerged as a tool to accelerate this image review process. We propose a multi-step, semi-automated workflow which takes advantage of site-specific and generalizable models to improve detections and consists of (1) automatically identifying and removing low-quality images in parallel with classification into animals, humans, vehicles, and empty, (2) automatically cropping objects from images and classifying them (rock, bait, empty, and species), and (3) manually inspecting a subset of images. We trained and evaluated this approach using 548,627 images from 46 cameras in two regions of the Arctic: “Finnmark” (Finnmark County, Norway) and “Yamal” (Yamalo-Nenets Autonomous District, Russia). The automated steps yield image classification accuracies of 92% and 90% for the Finnmark and Yamal sets, respectively, reducing the number of images that required manual inspection to 9.2% of the Finnmark set and 3.9% of the Yamal set. The amount of time invested in developing models would be offset by the time saved from automation in about three seasons/years. Researchers can modify this multi-step process to develop their own site-specific models and meet other needs for monitoring and surveying wildlife, balancing the acceptable levels of false negatives and positives.
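The routing logic of the three-step workflow described in the abstract above can be sketched as a single triage function. This is an illustrative sketch only: the model outputs are passed in as plain values, and the threshold names and defaults are hypothetical, not taken from the paper.

```python
def triage(quality_score, coarse_label, species_label, species_conf,
           quality_min=0.5, species_min=0.8):
    """Route one image through the semi-automated workflow.

    Returns a (decision, label) pair, where decision is "auto" for images
    labeled without human input and "manual" for images sent to review.
    """
    # Step 1: discard low-quality images, and accept the coarse label
    # directly for non-animal images (human, vehicle, empty).
    if quality_score < quality_min:
        return ("auto", "discarded")
    if coarse_label != "animal":
        return ("auto", coarse_label)
    # Step 2: accept confident species predictions made on cropped objects.
    if species_conf >= species_min:
        return ("auto", species_label)
    # Step 3: everything else goes to manual inspection.
    return ("manual", None)
```

Raising `species_min` trades more manual review for fewer false positives; the paper's reported 9.2% and 3.9% manual-inspection rates correspond to one such operating point per region.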