A versatile, semi-automated image analysis workflow for time-lapse camera trap image classification

Hanna Böhner
Olga Pokrovskaya
Desheng Liu
Natalia Sokolova
Olivier Gilg
Wenbo Zhou
Ivan Fufachev
Peter Ungar
Rolf Anker Ims
Alexsandr Sokolov
Dorothee Ehrich
Gerardo Celis
Ecological Informatics (2024)

Abstract

Camera traps are a powerful, practical, and non-invasive method used widely to monitor animal communities and evaluate management actions. However, camera trap arrays can generate thousands to millions of images that require significant time and effort to review. Computer vision has emerged as a tool to accelerate this image review process. We propose a multi-step, semi-automated workflow that takes advantage of site-specific and generalizable models to improve detections and consists of (1) automatically identifying and removing low-quality images in parallel with classification into animals, humans, vehicles, and empty, (2) automatically cropping objects from images and classifying them (rock, bait, empty, and species), and (3) manually inspecting a subset of images. We trained and evaluated this approach using 548,627 images from 46 cameras in two regions of the Arctic: “Finnmark” (Finnmark County, Norway) and “Yamal” (Yamalo-Nenets Autonomous District, Russia). The automated steps yield image classification accuracies of 92% and 90% for the Finnmark and Yamal sets, respectively, reducing the number of images requiring manual inspection to 9.2% of the Finnmark set and 3.9% of the Yamal set. The time invested in developing the models would be offset by the time saved through automation within about three field seasons (years). Researchers can modify this multi-step process to develop their own site-specific models and meet other needs for monitoring and surveying wildlife, balancing the acceptable levels of false negatives and positives.
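To illustrate how such a three-stage pipeline might be organized, the Python sketch below strings the steps together: filter low-quality frames and coarse-classify the rest, crop and classify detected objects, and route low-confidence results to manual review. The stub functions (detect_objects, classify_crop, is_low_quality), the ImageRecord fields, and the review_threshold value are hypothetical placeholders standing in for the paper's trained models, not the authors' implementation.

```python
from dataclasses import dataclass
from pathlib import Path

# Hypothetical stand-ins for the trained models: a generalizable detector for
# stage 1 and a site-specific crop classifier for stage 2.
def is_low_quality(image_path: Path) -> bool:
    """Flag unusable frames (e.g. over-/under-exposed or blurred)."""
    return False  # placeholder

def detect_objects(image_path: Path) -> list[dict]:
    """Return bounding boxes with coarse labels (animal/human/vehicle) and scores."""
    return []  # placeholder

def classify_crop(image_path: Path, box: dict) -> tuple[str, float]:
    """Return a fine-grained label (species/rock/bait/empty) and a confidence."""
    return "empty", 1.0  # placeholder

@dataclass
class ImageRecord:
    path: Path
    label: str = "empty"
    confidence: float = 0.0
    needs_review: bool = False

def run_pipeline(image_dir: Path, review_threshold: float = 0.9) -> list[ImageRecord]:
    records = []
    for path in sorted(image_dir.glob("*.jpg")):
        rec = ImageRecord(path=path)
        # Step 1: drop low-quality frames and coarse-classify the rest.
        if is_low_quality(path):
            rec.label = "low_quality"
        else:
            detections = detect_objects(path)
            if not detections:
                rec.label, rec.confidence = "empty", 1.0
            else:
                # Step 2: crop each detection, classify it, keep the best-scoring label.
                rec.label, rec.confidence = max(
                    (classify_crop(path, box) for box in detections),
                    key=lambda lc: lc[1],
                )
                # Step 3: route low-confidence images to manual inspection.
                rec.needs_review = rec.confidence < review_threshold
        records.append(rec)
    return records
```

In a real deployment, the placeholders would wrap the fitted detection and classification models, and the threshold would be tuned to the acceptable balance of false negatives and positives described in the abstract.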
