SREL Reprint #3615


Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images: MLWIC2

Michael A. Tabak1,2, Mohammad S. Norouzzadeh3, David W. Wolfson4, Erica J. Newton5,
Raoul K. Boughton6, Jacob S. Ivan7, Eric A. Odell7, Eric S. Newkirk7, Reesa Y. Conrey7,
Jennifer Stenglein8, Fabiola Iannarilli9, John Erb10, Ryan K. Brook11, Amy J. Davis12, Jesse Lewis13,
Daniel P. Walsh14, James C. Beasley15, Kurt C. VerCauteren16, Jeff Clune17, and Ryan S. Miller18

1Quantitative Science Consulting, LLC, Laramie, WY, USA
2Department of Zoology and Physiology, University of Wyoming, Laramie, WY, USA
3Computer Science Department, University of Wyoming, Laramie, WY, USA
4Minnesota Cooperative Fish and Wildlife Research Unit, Department of Fisheries,
Wildlife and Conservation Biology, University of Minnesota, St. Paul, MN, USA
5Wildlife Research and Monitoring Section, Ontario Ministry of Natural Resources and
Forestry, Peterborough, ON, Canada
6Range Cattle Research and Education Center, Wildlife Ecology and Conservation,
University of Florida, Ona, FL, USA
7Colorado Parks and Wildlife, Fort Collins, CO, USA
8Wisconsin Department of Natural Resources, Madison, WI, USA
9Conservation Sciences Graduate Program, University of Minnesota, St. Paul, MN, USA
10Forest Wildlife Populations and Research Group, Minnesota Department of Natural Resources,
Grand Rapids, MN, USA
11Department of Animal and Poultry Science, University of Saskatchewan, Saskatoon, SK, Canada
12National Wildlife Research Center, United States Department of Agriculture, Fort Collins, CO, USA
13College of Integrative Sciences and Arts, Arizona State University, Mesa, AZ, USA
14US Geological Survey, National Wildlife Health Center, Madison, WI, USA
15Savannah River Ecology Laboratory, Warnell School of Forestry and Natural Resources,
University of Georgia, Aiken, SC, USA
16National Wildlife Research Center, United States Department of Agriculture,
Animal and Plant Health Inspection Service, Fort Collins, CO, USA
17OpenAI, San Francisco, CA, USA
18Center for Epidemiology and Animal Health, United States Department of Agriculture,
Fort Collins, CO, USA

Abstract: Motion-activated wildlife cameras (or “camera traps”) are frequently used to remotely and noninvasively observe animals. The vast number of images collected from camera trap projects has prompted some biologists to employ machine learning algorithms to automatically recognize species in these images, or at least to filter out images that do not contain animals. These approaches are often limited by model transferability, as a model trained to recognize species from one location might not work as well for the same species in different locations. Furthermore, these methods often require advanced computational skills, making them inaccessible to many biologists. We used 3 million camera trap images from 18 studies in 10 states across the United States of America to train two deep neural networks: one that recognizes 58 species, the “species model,” and one that determines whether an image is empty or contains an animal, the “empty-animal model.” Our species model and empty-animal model had accuracies of 96.8% and 97.3%, respectively. Furthermore, the models performed well on some out-of-sample datasets: the species model had 91% accuracy on species from Canada (accuracy range 36%–91% across all out-of-sample datasets), and the empty-animal model achieved accuracies of 91%–94% on out-of-sample datasets from different continents. Our software addresses some of the limitations of using machine learning to classify images from camera traps. By including many species from several locations, our species model is potentially applicable to many camera trap studies in North America. We also found that our empty-animal model can facilitate removal of images without animals globally. We provide the trained models in an R package (MLWIC2: Machine Learning for Wildlife Image Classification in R), which contains Shiny Applications that allow scientists with minimal programming experience to use trained models and train new models in six neural network architectures with varying depths.

Keywords: computer vision, deep convolutional neural networks, image classification, machine learning, motion-activated camera, R package, remote sensing, species identification


Tabak, M. A., M. S. Norouzzadeh, D. W. Wolfson, E. J. Newton, R. K. Boughton, J. S. Ivan, E. A. Odell, E. S. Newkirk, R. Y. Conrey, J. Stenglein, F. Iannarilli, J. Erb, R. K. Brook, A. J. Davis, J. Lewis, D. P. Walsh, J. C. Beasley, K. C. VerCauteren, J. Clune, and R. S. Miller. 2020. Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images: MLWIC2. Ecology and Evolution 10:10374-10383.


This information was provided by the University of Georgia's Savannah River Ecology Laboratory (srel.uga.edu).