Species in Jura study site | n | Species in Ain study site | n |
---|---|---|---|
human | 31644 | human | 4946 |
vehicule | 5637 | vehicule | 4454 |
dog | 2779 | dog | 2310 |
fox | 2088 | fox | 1587 |
chamois | 919 | rider | 1025 |
wild boar | 522 | roe deer | 860 |
badger | 401 | chamois | 780 |
roe deer | 368 | hunter | 593 |
cat | 343 | wild boar | 514 |
lynx | 302 | badger | 461 |
1 Introduction
Computer vision is a field of artificial intelligence in which a machine is taught how to extract and interpret the content of an image (Krizhevsky, Sutskever, and Hinton 2012). Computer vision relies on deep learning that allows computational models to learn from training data – a set of manually labelled images – and make predictions on new data – a set of unlabelled images (Baraniuk, Donoho, and Gavish 2020; LeCun, Bengio, and Hinton 2015). With the growing availability of massive data, computer vision with deep learning is being increasingly used to perform tasks such as object detection, face recognition, action and activity recognition or human pose estimation in fields as diverse as medicine, robotics, transportation, genomics, sports and agriculture (Voulodimos et al. 2018).
In ecology in particular, there is a growing interest in deep learning for automatizing repetitive analyses on large amounts of images, such as identifying plant and animal species, distinguishing individuals of the same or different species, counting individuals or detecting relevant features (Christin, Hervet, and Lecomte 2019; Lamba et al. 2019; Weinstein 2018). By saving hours of manual data analyses and tapping into massive amounts of data that keep accumulating with technological advances, deep learning has the potential to become an essential tool for ecologists and applied statisticians.
Despite the promising future of computer vision and deep learning, several challenges stand in the way of their wide adoption by the community of ecologists (e.g. Wearn, Freeman, and Jacoby 2019). First, there is a programming barrier: most algorithms are written in the Python language (but see MXNet in R and the R interface to Keras) while most ecologists are versed in R (Lai et al. 2019). If ecologists are to use computer vision routinely, there is a need for bridges between these two languages (through, e.g., the reticulate package (Allaire et al. 2017) or the shiny package (Tabak et al. 2020)). Second, ecologists may be reluctant to develop deep learning algorithms that require large amounts of computation time and consequently come with an environmental cost due to carbon emissions (Strubell, Ganesh, and McCallum 2019). Third, recent applications of computer vision via deep learning in ecology have focused on computational aspects and simple tasks without addressing the underlying ecological questions (Sutherland et al. 2013), or carrying out statistical data analyses to answer these questions (Gimenez et al. 2014). Although perfectly understandable given the challenges at hand, we argue that a better integration of the why (ecological questions), the what (automatically labelled images) and the how (statistics) would be beneficial to computer vision for ecology (see also Weinstein 2018).
Here, we showcase a full why-what-how workflow in R using a case study on the structure of an ecological community (a set of co-occurring species) composed of the Eurasian lynx (Lynx lynx) and its two main prey species. First, we introduce the case study and motivate the need for deep learning. Second, we illustrate deep learning for the identification of animal species in large numbers of images, including model training and validation with a dataset of labelled images, and prediction on a new dataset of unlabelled images. Last, we proceed with the quantification of spatial co-occurrence using statistical models.
2 Collecting images with camera traps
Lynx (Lynx lynx) went extinct in France at the end of the 19th century due to habitat degradation, human persecution and decrease in prey availability (Vandel and Stahl 2005). The species was reintroduced in Switzerland in the 1970s (Breitenmoser 1998), then re-colonised France through the Jura mountains in the 1980s (Vandel and Stahl 2005). The species is listed as endangered under the 2017 IUCN Red List and is of conservation concern in France due to habitat fragmentation, poaching and collisions with vehicles. The Jura mountains hold the bulk of the French lynx population.
To better understand its distribution, we need to quantify its interactions with its main prey, roe deer (Capreolus capreolus) and chamois (Rupicapra rupicapra) (Molinari-Jobin et al. 2007), two ungulate species that are also hunted. To assess the relative contribution of predation and hunting to the community structure and dynamics, a predator-prey program was set up jointly by the French Office for Biodiversity, the Federations of Hunters from the Jura, Ain and Haute-Savoie counties and the French National Centre for Scientific Research. Animal detections were made using camera traps deployed in the Jura and Ain counties in the Jura mountains (see Figure 1). Altitude ranges from 520 m to 1150 m in the Jura site, and from 400 m to 950 m in the Ain site. Woodland covers 69% of the Ain site, with deciduous forests (63%) followed by coniferous (19.5%) and mixed forests (12.5%). In the Jura site, woodland covers 62% of the area, with mixed forests (46.6%), deciduous forests (37.3%) and coniferous (14%). In both sites, the remaining habitat is meadows used by cattle.
We divided the two study areas into grids of 2.7 \times 2.7 km cells (sites hereafter; Zimmermann et al. 2013) in which we set two camera traps per site (Xenon white flash with passive infrared trigger mechanisms, models Capture, Ambush and Attack; Cuddeback), with 18 sites in the Jura study area and 11 in the Ain study area that were active over the study period (from February 2016 to October 2017 for the Jura county, and from February 2017 to May 2019 for the Ain county). The location of camera traps was chosen to maximise lynx detection. More precisely, camera traps were set up along large paths in the forest, on each side of the path at a height of 50 cm. Camera traps were checked weekly to change memory cards and batteries, and to remove fresh snow after heavy snowfall.
In total, 45563 and 18044 pictures were considered in the Jura and Ain sites respectively, after manually dropping empty pictures and pictures with unidentified species. Note that classifying empty images could be automated with deep learning (Norouzzadeh et al. 2021; Tabak et al. 2020). We identified the species present on all images by hand (see Table 1) using digiKam, a free open-source digital photo management application (https://www.digikam.org/). This operation took several weeks of full-time work, which is often identified as a limitation of camera-trap studies. To expedite this tedious task, computer vision with deep learning has been identified as a promising approach (Norouzzadeh et al. 2021; Tabak et al. 2019; Willi et al. 2019).
3 Deep learning for species identification
Using the images we obtained with camera traps (Table 1), we trained a model for identifying species using the Jura study site as a calibration dataset. We then assessed this model’s ability to automatically identify species on a new dataset, also known as transferability, using the Ain study site as an evaluation dataset. Even though in the present work we quantified co-occurrence between lynx and its prey only, we included other species in the training so that the structure and dynamics of the entire community can be investigated in future work. In addition, using specific species categories rather than a single “other” category besides the focal species should help the algorithm classify with better confidence images that do not contain a focal species, either because there is no doubt that another species is present (think of a vehicle for example), or because the detected species is one a focal species can be confused with (e.g. lynx with fox).
3.1 Training - Jura study site
We selected at random 80% of the annotated images for each species in the Jura study site for training, and 20% for testing. We applied various transformations (flipping, brightness and contrast modifications following Shorten and Khoshgoftaar (2019)) to improve training (see Appendix). To reduce model training time and overcome the small number of images, we used transfer learning (Yosinski et al. 2014; Shao, Zhu, and Li 2015) and considered a pre-trained model as a starting point. Specifically, we trained a deep convolutional neural network with the ResNet-50 architecture (He et al. 2016) using the fastai library (https://docs.fast.ai/), which builds on the PyTorch library (Paszke et al. 2019). Interestingly, the fastai library comes with an R interface (https://eagerai.github.io/fastai/) that uses the reticulate package to communicate with Python, therefore allowing R users to access up-to-date deep learning tools. We trained models on the Montpellier Bioinformatics Biodiversity platform using a GPU machine (NVIDIA Titan Xp) with 16 GB of RAM. We used 20 epochs, which took approximately 10 hours. The computational burden prevented us from providing a fully reproducible analysis, but we do so with a subsample of the dataset in the Appendix. All trained models are available from https://doi.org/10.5281/zenodo.5164796.
Using the testing dataset, we calculated three metrics to evaluate our model performance at correctly identifying species (e.g. Duggan et al. 2021). Specifically, we relied on accuracy, the ratio of correct predictions to the total number of predictions; recall, a measure of false negatives (FN; e.g. an image with a lynx for which our model predicts another species), with recall = TP / (TP + FN) where TP stands for true positives; and precision, a measure of false positives (FP; e.g. an image with any species but a lynx for which our model predicts a lynx), with precision = TP / (TP + FP). In camera-trap studies, a common strategy (Duggan et al. 2021) consists in optimizing precision if the focus is on rare species (lynx), while recall should be optimized if the focus is on common species (chamois and roe deer).
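To make these definitions concrete, here is a minimal R sketch computing the three metrics from vectors of manual and predicted labels; the two vectors are toy examples, not our data:
library(tidyverse)
truth      <- c("lynx", "lynx", "roe deer", "chamois", "chamois", "fox")
prediction <- c("lynx", "fox",  "roe deer", "chamois", "roe deer", "fox")
# overall accuracy: proportion of correct predictions
accuracy <- mean(truth == prediction)
# per-species recall (TP / (TP + FN)) and precision (TP / (TP + FP))
recall <- tibble(truth, prediction) %>%
  group_by(species = truth) %>%
  summarise(recall = mean(prediction == species))
precision <- tibble(truth, prediction) %>%
  group_by(species = prediction) %>%
  summarise(precision = mean(truth == species))
left_join(recall, precision, by = "species")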
We achieved 85% accuracy during training. Our model had good performances for the three classes we were interested in, with 87% precision for lynx and 81% recall for both roe deer and chamois (Table 2).
species | precision | recall |
---|---|---|
badger | 0.78 | 0.88 |
red deer | 0.67 | 0.21 |
chamois | 0.86 | 0.81 |
cat | 0.89 | 0.78 |
roe deer | 0.67 | 0.81 |
dog | 0.78 | 0.84 |
human | 0.99 | 0.79 |
hare | 0.32 | 0.52 |
lynx | 0.87 | 0.95 |
fox | 0.85 | 0.90 |
wild boar | 0.93 | 0.88 |
vehicule | 0.95 | 0.98 |
3.2 Transferability - Ain study site
We evaluated transferability for our trained model by predicting species on images from the Ain study site which were not used for training. Precision was 77% for lynx, and while we achieved 86% recall for roe deer, our model performed poorly for chamois with 8% recall (Table 3).
species | precision | recall |
---|---|---|
badger | 0.71 | 0.89 |
rider | 0.79 | 0.92 |
red deer | 0.00 | 0.00 |
chamois | 0.82 | 0.08 |
hunter | 0.17 | 0.11 |
cat | 0.46 | 0.59 |
roe deer | 0.67 | 0.86 |
dog | 0.77 | 0.35 |
human | 0.51 | 0.93 |
hare | 0.37 | 0.35 |
lynx | 0.77 | 0.89 |
marten | 0.05 | 0.04 |
fox | 0.90 | 0.53 |
wild boar | 0.75 | 0.94 |
cow | 0.01 | 0.25 |
vehicule | 0.94 | 0.51 |
To better understand this pattern, we display the results as a confusion matrix comparing model classifications to manual classifications (Figure 2). There were many false negatives for chamois: when a chamois was present in an image, it was often classified as another species by our model.
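This kind of cross-tabulation can be obtained directly in R; the vectors below are toy examples (not our data), and packages such as cvms provide plotting helpers for a graphical version:
truth      <- c("lynx", "chamois", "chamois", "roe deer", "fox", "chamois")
prediction <- c("lynx", "roe deer", "chamois", "roe deer", "fox", "roe deer")
# rows: manual labels, columns: model classifications; off-diagonal entries in
# a row are false negatives for that species, in a column false positives
table(truth = truth, predicted = prediction)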
Overall, our model trained on images from the Jura study site did poorly at correctly predicting species on images from the Ain study site. This result does not come as a surprise, as generalizing classification algorithms to new environments is known to be difficult (Beery, Horn, and Perona 2018). While a computer scientist might be disappointed in these results, an ecologist would probably wonder whether ecological inference about the co-occurrence between lynx and its prey is biased by these average performances, a question we address in the next section.
4 Spatial co-occurrence
Here, we analysed the data obtained in the previous section. For the sake of comparison, we considered two datasets: one made of the images manually labelled for both the Jura and Ain study sites pooled together (ground truth dataset), and another in which we pooled the images manually labelled for the Jura study site with the images automatically labelled for the Ain study site using our trained model (classified dataset).
We formatted the data by generating monthly detection histories, that is a sequence of detections (Y_{sit} = 1) and non-detections (Y_{sit} = 0), for species s at site i and sampling occasion t (see Figure 3).
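To give an idea of this formatting step, here is a minimal sketch in R; the images table and its columns (site, species, date) are hypothetical stand-ins, not the actual structure of our data:
library(tidyverse)
# hypothetical table of classified images: one row per image
images <- tibble(
  site    = c("A", "A", "B", "B", "B"),
  species = c("lynx", "roe deer", "lynx", "lynx", "chamois"),
  date    = as.Date(c("2017-02-03", "2017-02-10", "2017-03-22", "2017-04-01", "2017-04-15"))
)
# monthly detection history for lynx: 1 if detected at least once at a site
# during a month, 0 otherwise (sites/months without any detection would be
# added with complete() in a real analysis)
dh_lynx <- images %>%
  filter(species == "lynx") %>%
  mutate(occasion = format(date, "%Y-%m")) %>%
  distinct(site, occasion) %>%
  mutate(detected = 1) %>%
  pivot_wider(names_from = occasion, values_from = detected, values_fill = 0)
dh_lynx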
To quantify spatial co-occurrence between lynx and its prey, we used a multispecies occupancy modeling approach (Rota et al. 2016) implemented in the R package unmarked (Fiske and Chandler 2011) within the maximum likelihood framework. The multispecies occupancy model assumes that observations y_{sit}, conditional on Z_{si}, the latent occupancy state of species s at site i, are drawn from Bernoulli random variables Y_{sit} | Z_{si} \sim \text{Bernoulli}(Z_{si}p_{sit}) where p_{sit} is the detection probability of species s at site i and sampling occasion t. Detection probabilities can be modeled as functions of site and/or sampling covariates, or of the presence/absence of other species, but for the sake of illustration we make them only species-specific here.
The latent occupancy states are assumed to be distributed as multivariate Bernoulli random variables (Dai, Ding, and Wahba 2013). Considering two species, species 1 and 2, we have Z_i = (Z_{i1}, Z_{i2}) \sim \text{multivariate Bernoulli}(\psi_{11}, \psi_{10}, \psi_{01}, \psi_{00}) where \psi_{11} is the probability that a site is occupied by both species 1 and 2, \psi_{10} the probability that a site is occupied by species 1 but not 2, \psi_{01} the probability that a site is occupied by species 2 but not 1, and \psi_{00} the probability that a site is occupied by neither of them. Note that we considered occupancy probabilities that are only species-specific, but these could be modeled as functions of site-specific covariates. Marginal occupancy probabilities are obtained as \Pr(Z_{i1}=1) = \psi_{11} + \psi_{10} and \Pr(Z_{i2}=1) = \psi_{11} + \psi_{01}. With this model, we may also infer co-occurrence by calculating conditional probabilities, such as the probability of a site being occupied by species 2 conditional on the presence of species 1, \Pr(Z_{i2} = 1| Z_{i1} = 1) = \displaystyle{\frac{\psi_{11}}{\psi_{11}+\psi_{10}}}.
Despite their appeal and increasing use in ecology, multispecies occupancy models can be difficult to fit to real-world data in practice. First, these models are data-hungry, and regularization methods (Clipp et al. 2021) are needed to avoid occupancy probabilities being estimated at the boundary of the parameter space or with large uncertainty. Second, and this is true for any joint species distribution model, these models quickly become very complex, with many parameters to estimate when the number of species increases and co-occurrence is allowed between all species. Here, ecological expertise should be used to consider only meaningful species interactions and to apply parsimony when parameterizing models.
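To give an idea of the syntax, here is a minimal sketch of a two-species model fitted with occuMulti() from unmarked; the detection matrices are simulated and all formulas are intercept-only, so this illustrates the mechanics rather than our actual analysis. In recent versions of unmarked, the regularization of Clipp et al. (2021) mentioned above is available through the penalty argument of occuMulti().
library(unmarked)
set.seed(1)
n_sites <- 29; n_occasions <- 20
y_lynx    <- matrix(rbinom(n_sites * n_occasions, 1, 0.3), nrow = n_sites) # simulated detections
y_roedeer <- matrix(rbinom(n_sites * n_occasions, 1, 0.4), nrow = n_sites)
umf <- unmarkedFrameOccuMulti(y = list(lynx = y_lynx, roedeer = y_roedeer))
# one detection formula per species; one occupancy formula per natural parameter
# (f_lynx, f_roedeer, f_lynx:roedeer), all intercept-only here
fit <- occuMulti(detformulas   = c("~1", "~1"),
                 stateformulas = c("~1", "~1", "~1"),
                 data = umf)
# marginal occupancy of lynx, and lynx occupancy conditional on roe deer presence
psi_lynx      <- predict(fit, type = "state", species = "lynx")
psi_lynx_cond <- predict(fit, type = "state", species = "lynx", cond = "roedeer")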
We now turn to the results obtained from a model with five species, namely lynx, chamois, roe deer, fox and cat, with co-occurrence allowed only between lynx and its two prey species (chamois and roe deer).
Detection probabilities were indistinguishable (at the third decimal) whether we used the ground truth or the classified dataset, with p_{\text{lynx}} = 0.51 (0.45, 0.58), p_{\text{roe deer}} = 0.63 (0.57, 0.68) and p_{\text{chamois}} = 0.61 (0.55, 0.67).
We also found that occupancy probability estimates were similar whether we used the ground truth or the classified dataset (Figure 4). Roe deer was the most prevalent species, but lynx and chamois also occurred with high probability (Figure 4). Note that, despite chamois being often misclassified (Figure 2), its marginal occupancy tends to be higher when estimated with the classified dataset. Ecologically speaking, this might well be the case if the correctly classified detections are spread over all camera traps. The difference in marginal occupancy appears, however, not to be significant, judging by the overlap between the two confidence intervals.
Because marginal occupancy probabilities were high, probabilities of co-occurrence were also estimated to be high (Figure 5). Our results should be interpreted bearing in mind that co-occurrence is a necessary but not sufficient condition for actual interaction. Lynx occupancy was higher when both prey species were present than when both were absent (Figure 5). Lynx was more sensitive to the presence of roe deer than to that of chamois (Figure 5).
Overall, we found similar or higher uncertainty in estimates obtained from the classified dataset (Figure 4 and Figure 5). Sample size being similar for both datasets, we do not have a solid explanation for this pattern.
5 Discussion
In this paper, we aimed at illustrating a reproducible workflow for studying the structure of an animal community and species spatial co-occurrence (why) using images acquired with camera traps and automatically labelled with deep learning (what), which we analysed with statistical occupancy models accounting for imperfect species detection (how). Overall, we found that, even though model transferability could be improved, inference about the co-occurrence of lynx and its prey was similar whether we analysed the ground truth data or the classified data.
This result calls for further work on the trade-offs between the time and resources allocated to train models with deep learning and our ability to correctly answer key ecological questions with camera-trap surveys. In other words, while a computer scientist might be keen on spending time training models to achieve top performance, an ecologist would rather rely on a model showing average performance and use this time to proceed with statistical analyses, provided, of course, that errors in computer-annotated images do not flaw ecological inference. The right balance may be found with collaborative projects in which scientists from artificial intelligence, statistics and ecology agree on a common objective and identify research questions that can pique the interest of all parties.
Our demonstration remains however empirical, and ecological inference might no longer be robust to misclassification if detections and non-detections were pooled weekly or daily, or if more complex models, e.g. including time-varying detection probabilities and/or habitat-specific occupancy probabilities, were fitted to the data. Therefore, we encourage others to try and replicate our results. In that spirit, we praise previous work on plants which used deep learning to produce occurrence data and tested the sensitivity of species distribution models to image classification errors (Botella et al. 2018). We also see two avenues of research that could benefit the integration of deep learning and ecological statistics. First, a simulation study could be conducted to evaluate bias and precision in ecological parameter estimators with regard to errors in image annotation by computers. The outcome of this exercise could be, for example, guidelines informing on the confidence an investigator may place in ecological inference as a function of the amount of false negatives and false positives. Second, annotation errors could be accommodated directly in statistical models. For example, single-species occupancy models account for false negatives when a species is not detected by the camera at a site where it is present, as well as false positives when a species is detected at a site where it is not present due to species misidentification by the observer (Miller et al. 2011). Pending a careful distinction between ecological vs. computer-generated false negatives and false positives, error rates could be added to multispecies occupancy models (Chambert et al. 2018) and informed by the recall and precision metrics obtained during model training (Tabak et al. 2020). An alternative, quick-and-dirty option would be a Monte Carlo approach: sample the species detected or not detected in each picture according to its predicted probability of belonging to a given class, build the corresponding dataset, and fit occupancy models to it, repeating this for each sample.
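A rough sketch of this resampling idea follows, assuming a hypothetical image-by-class matrix of predicted probabilities (probs) standing in for actual classifier output:
set.seed(42)
classes <- c("lynx", "roe deer", "chamois")
# hypothetical image-by-class matrix of predicted probabilities (softmax output)
probs <- matrix(c(0.80, 0.15, 0.05,
                  0.10, 0.60, 0.30,
                  0.05, 0.25, 0.70),
                nrow = 3, byrow = TRUE,
                dimnames = list(paste0("img", 1:3), classes))
# draw 100 plausible labellings of the images; each column is one sample from
# which detection histories could be rebuilt and an occupancy model refitted
n_samples <- 100
resampled <- replicate(n_samples,
                       apply(probs, 1, function(p) sample(classes, 1, prob = p)))
dim(resampled) # 3 images x 100 samples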
When it comes to the case study, our results should be discussed with regard to the sensitivity of co-occurrence estimates to errors in automatic species classification. In particular, we expected that confusions between the two prey species might artificially increase the estimated probability of co-occurrence with lynx. This was illustrated by \Pr(\text{lynx present} | \text{roe deer present and chamois absent}) (resp. \Pr(\text{lynx present} | \text{roe deer absent and chamois present})) being estimated higher (resp. lower) with the classified than the ground truth dataset (Figure 5). This pattern could be explained by chamois being often classified as (and confused with) roe deer (Figure 2).
Our results are only preliminary, and we see several perspectives to our work. First, we focused our analysis on lynx and its main prey, while other species should be included to get a better understanding of the community structure. For example, both lynx and fox prey on small rodents and birds, and a model including co-occurrence between these two predators showed better support by the data (AIC was 1544 when co-occurrence was included vs. 1557 when it was not). Second, we aim at quantifying the relative contribution of biotic (lynx predation on chamois and roe deer) and abiotic (habitat quality) processes to the composition and dynamics of this ecological community. Third, to benefit future camera-trap studies of lynx in the Jura mountains, we plan to train a model again using more manually annotated images from both the Jura and the Ain study sites. These perspectives are the object of ongoing work.
With the rapid advances in technologies for biodiversity monitoring (Lahoz-Monfort and Magrath 2021), the possibility of analysing large amounts of images makes deep learning appealing to ecologists. We hope that our proposal of a reproducible R workflow for deep learning and statistical ecology will encourage further studies in the integration of these disciplines, and contribute to the adoption of computer vision by ecologists.
6 Appendix: Reproducible example of species identification on camera trap images with CPU
In this section, we go through a reproducible example of the entire deep learning workflow, including data preparation, model training, and automatic labeling of new images. We used a subsample of 467 images from the original dataset in the Jura county to allow the training of our model with CPU on a personal computer. We also used 14 images from the original dataset in the Ain county to illustrate prediction.
6.1 Training and validation datasets
We first split the dataset of Jura images into two datasets, one for training and the other for validation. We use the exifr package to extract metadata from the images, get a list of image names and extract the species from these.
library(exifr)
library(tidyverse) # provides the pipe and the dplyr/tidyr/forcats verbs used below

pix_folder <- 'pix/pixJura/'
file_list <- list.files(path = pix_folder,
                        recursive = TRUE,
                        pattern = "*.jpg",
                        full.names = TRUE)

labels <- read_exif(file_list) %>%
  as_tibble() %>%
  unnest(Keywords, keep_empty = TRUE) %>% # keep_empty = TRUE keeps pix with no labels (empty pix)
  group_by(SourceFile) %>%
  slice_head() %>% # when several labels in a pix, keep first only
  ungroup() %>%
  mutate(Keywords = as_factor(Keywords)) %>%
  mutate(Keywords = fct_explicit_na(Keywords, "wo_tag")) %>% # when pix has no tag
  select(SourceFile, FileName, Keywords) %>%
  mutate(Keywords = fct_recode(Keywords,
                               "chat" = "chat forestier",
                               "lievre" = "lièvre",
                               "vehicule" = "véhicule",
                               "ni" = "Non identifié")) %>%
  filter(!(Keywords %in% c("ni", "wo_tag")))
Keywords | n |
---|---|
humain | 143 |
vehicule | 135 |
renard | 58 |
sangliers | 33 |
chasseur | 17 |
chien | 14 |
lynx | 13 |
chevreuil | 13 |
chamois | 12 |
blaireaux | 10 |
chat | 8 |
lievre | 4 |
fouine | 1 |
cavalier | 1 |
Then we pick 80% of the images in each category for training, the rest being used for validation.
# training dataset
pix_train <- labels %>%
  select(SourceFile, FileName, Keywords) %>%
  group_by(Keywords) %>%
  filter(between(row_number(), 1, floor(n()*80/100))) # 80% per category

# validation dataset
pix_valid <- labels %>%
  group_by(Keywords) %>%
  filter(between(row_number(), floor(n()*80/100) + 1, n()))
Eventually, we store these images in two distinct directories named train/ and valid/.
# create dir train/ and copy pix there, organised by categories
dir.create('pix/train') # create training directory
for (i in levels(fct_drop(pix_train$Keywords))) dir.create(paste0('pix/train/', i)) # create dir for labels
for (i in 1:nrow(pix_train)){
  file.copy(as.character(pix_train$SourceFile[i]),
            paste0('pix/train/', as.character(pix_train$Keywords[i]))) # copy pix in corresp dir
}

# create dir valid/ and copy pix there, organised by categories
dir.create('pix/valid') # create validation dir
for (i in levels(fct_drop(pix_train$Keywords))) dir.create(paste0('pix/valid/', i)) # create dir for labels
for (i in 1:nrow(pix_valid)){
  file.copy(as.character(pix_valid$SourceFile[i]),
            paste0('pix/valid/', as.character(pix_valid$Keywords[i]))) # copy pix in corresp dir
}

# delete pictures in valid/ directory for which we did not train the model
to_be_deleted <- setdiff(levels(fct_drop(pix_valid$Keywords)), levels(fct_drop(pix_train$Keywords)))
if (!is_empty(to_be_deleted)) {
  for (i in 1:length(to_be_deleted)){
    unlink(paste0('pix/valid/', to_be_deleted[i]))
  }
}
What is the sample size of these two datasets?
library(knitr)      # kable()
library(kableExtra) # kable_styling()
bind_rows("training" = pix_train, "validation" = pix_valid, .id = "dataset") %>%
  group_by(dataset) %>%
  count(Keywords) %>%
  rename(category = Keywords) %>%
  kable() %>%
  kable_styling()
dataset | category | n |
---|---|---|
training | humain | 114 |
training | vehicule | 108 |
training | chamois | 9 |
training | blaireaux | 8 |
training | sangliers | 26 |
training | renard | 46 |
training | chasseur | 13 |
training | lynx | 10 |
training | chien | 11 |
training | chat | 6 |
training | chevreuil | 10 |
training | lievre | 3 |
validation | humain | 29 |
validation | vehicule | 27 |
validation | chamois | 3 |
validation | blaireaux | 2 |
validation | sangliers | 7 |
validation | renard | 12 |
validation | chasseur | 4 |
validation | lynx | 3 |
validation | chien | 3 |
validation | fouine | 1 |
validation | chat | 2 |
validation | chevreuil | 3 |
validation | lievre | 1 |
validation | cavalier | 1 |
6.2 Transfer learning
We proceed with transfer learning using images from the Jura county (a subsample, more exactly). We first load the images and apply standard transformations to improve training (flip, rotate, zoom, light transforms).
library(reticulate)
#reticulate::use_condaenv("computo")
library(fastai)

dls <- ImageDataLoaders_from_folder(
  path = "pix/",
  train = "train",
  valid = "valid",
  item_tfms = Resize(size = 460),
  bs = 10,
  batch_tfms = list(aug_transforms(size = 224,
                                   min_scale = 0.75), # transformation
                    Normalize_from_stats(imagenet_stats())),
  num_workers = 0,
  ImageFile.LOAD_TRUNCATED_IMAGES = TRUE)
Then we get the model architecture. For the sake of illustration, we use a resnet18 here, but we used a resnet50 to get the full results presented in the main text.
learn <- cnn_learner(dls = dls,
                     arch = resnet18(),
                     metrics = list(accuracy, error_rate))
Now we are ready to train our model. Again, for the sake of illustration, we use only 2 epochs here, but we used 20 epochs to get the full results presented in the main text. With all pictures and a resnet50, each epoch took approximately 75 minutes on a Mac with a 2.4 GHz processor and 64 GB of memory, and less than half an hour on a machine with a GPU. On this reduced dataset, it took a bit more than a minute per epoch on the same Mac. Note that we save the model after each epoch for later use.
one_cycle <- learn %>%
  fit_one_cycle(2, cbs = SaveModelCallback(every_epoch = TRUE,
                                           fname = 'model'))
epoch   train_loss   valid_loss   accuracy   error_rate   time
0       2.625426     0.868738     0.718750   0.281250     00:53
1       1.787133     0.778699     0.750000   0.250000     00:53
one_cycle
epoch train_loss valid_loss accuracy error_rate
1 0 2.625426 0.8687379 0.71875 0.28125
2 1 1.787133 0.7786992 0.75000 0.25000
We may dig a bit deeper into training performance by loading the best model, here model_1.pth, and displaying some metrics for each species.
learn$load("model_1")
Sequential(
(0): Sequential(
(0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(6): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(7): Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(1): Sequential(
(0): AdaptiveConcatPool2d(
(ap): AdaptiveAvgPool2d(output_size=1)
(mp): AdaptiveMaxPool2d(output_size=1)
)
(1): fastai.layers.Flatten(full=False)
(2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): Dropout(p=0.25, inplace=False)
(4): Linear(in_features=1024, out_features=512, bias=False)
(5): ReLU(inplace=True)
(6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): Dropout(p=0.5, inplace=False)
(8): Linear(in_features=512, out_features=12, bias=False)
)
)
interp <- ClassificationInterpretation_from_learner(learn)
interp$print_classification_report()
We may extract the categories that are most often confused with each other.
interp %>% most_confused()
V1 V2 V3
1 humain vehicule 4
2 chasseur humain 2
3 chevreuil chien 2
4 chien sangliers 2
5 humain chasseur 2
6 chamois chevreuil 1
7 chamois chien 1
8 chamois renard 1
9 chat renard 1
10 chevreuil renard 1
11 chien chasseur 1
12 humain lievre 1
13 lievre renard 1
14 lynx renard 1
15 renard humain 1
16 renard sangliers 1
17 vehicule humain 1
6.3 Transferability
In this section, we show how to use our freshly trained model to label images that were taken in another study site in the Ain county, and not used to train our model. First, we get the path to the images.
fls <- list.files(path = "pix/pixAin",
                  full.names = TRUE,
                  recursive = TRUE)
Then we carry out prediction, and compare to the truth.
predicted <- character(3)
categories <- interp$vocab %>%
  as.character() %>%
  str_replace_all("[[:punct:]]", " ") %>%
  str_trim() %>%
  str_split(" ") %>%
  unlist()
for (i in 1:length(fls)){
  result <- learn %>% predict(fls[i]) # make prediction
  result[[3]] %>%
    as.character() %>%
    str_extract("\\d+") %>%
    as.integer() -> index # extract relevant info
  predicted[i] <- categories[index + 1] # match it with categories
}
data.frame(truth = c("lynx", "roe deer", "wild boar"),
           prediction = predicted) %>%
  kable() %>%
  kable_styling()
truth | prediction |
---|---|
lynx | renard |
roe deer | chevreuil |
wild boar | sangliers |
References
Session information
R version 4.2.2 (2022-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 22.04.1 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so
locale:
[1] LC_CTYPE=C.UTF-8 LC_NUMERIC=C LC_TIME=C.UTF-8
[4] LC_COLLATE=C.UTF-8 LC_MONETARY=C.UTF-8 LC_MESSAGES=C.UTF-8
[7] LC_PAPER=C.UTF-8 LC_NAME=C LC_ADDRESS=C
[10] LC_TELEPHONE=C LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices datasets utils methods base
other attached packages:
[1] reticulate_1.28 exifr_0.3.2 unmarked_1.2.5 cvms_1.3.9
[5] janitor_2.1.0 highcharter_0.9.4 fastai_2.2.0 ggtext_0.1.2
[9] wesanderson_0.3.6 kableExtra_1.3.4 stringi_1.7.12 lubridate_1.9.0
[13] timechange_0.2.0 cowplot_1.1.1 sf_1.0-9 forcats_0.5.2
[17] stringr_1.5.0 dplyr_1.1.0 purrr_1.0.1 readr_2.1.3
[21] tidyr_1.3.0 tibble_3.1.8 ggplot2_3.4.0 tidyverse_1.3.2
loaded via a namespace (and not attached):
[1] minqa_1.2.5 googledrive_2.0.0 colorspace_2.1-0
[4] ggsignif_0.6.4 ellipsis_0.3.2 class_7.3-21
[7] snakecase_0.11.0 markdown_1.4 fs_1.6.0
[10] gridtext_0.1.5 rstudioapi_0.14 proxy_0.4-27
[13] ggpubr_0.5.0 farver_2.1.1 bit64_4.0.5
[16] fansi_1.0.4 xml2_1.3.3 splines_4.2.2
[19] knitr_1.41 rlist_0.4.6.2 jsonlite_1.8.4
[22] nloptr_2.0.3 broom_1.0.3 dbplyr_2.3.0
[25] png_0.1-8 compiler_4.2.2 httr_1.4.4
[28] backports_1.4.1 assertthat_0.2.1 Matrix_1.5-3
[31] fastmap_1.1.0 gargle_1.2.1 cli_3.6.0
[34] htmltools_0.5.4 tools_4.2.2 igraph_1.3.5
[37] gtable_0.3.1 glue_1.6.2 rappdirs_0.3.3
[40] Rcpp_1.0.10 carData_3.0-5 cellranger_1.1.0
[43] vctrs_0.5.2 nlme_3.1-161 svglite_2.1.1
[46] xfun_0.36 lme4_1.1-31 rvest_1.0.3
[49] lifecycle_1.0.3 renv_0.16.0 rstatix_0.7.1
[52] googlesheets4_1.0.1 MASS_7.3-58.2 zoo_1.8-11
[55] scales_1.2.1 vroom_1.6.0 hms_1.1.2
[58] parallel_4.2.2 RColorBrewer_1.1-3 yaml_2.3.6
[61] quantmod_0.4.20 curl_5.0.0 pbapply_1.7-0
[64] highr_0.10 e1071_1.7-12 checkmate_2.1.0
[67] TTR_0.24.3 boot_1.3-28.1 commonmark_1.8.1
[70] rlang_1.0.6 pkgconfig_2.0.3 systemfonts_1.0.4
[73] evaluate_0.20 lattice_0.20-45 labeling_0.4.2
[76] htmlwidgets_1.6.1 bit_4.0.5 tidyselect_1.2.0
[79] plyr_1.8.8 magrittr_2.0.3 R6_2.5.1
[82] generics_0.1.3 DBI_1.1.3 pillar_1.8.1
[85] haven_2.5.1 withr_2.5.0 units_0.8-1
[88] xts_0.12.2 abind_1.4-5 car_3.1-1
[91] modelr_0.1.10 crayon_1.5.2 KernSmooth_2.23-20
[94] utf8_1.2.2 tzdb_0.3.0 rmarkdown_2.19
[97] jpeg_0.1-10 grid_4.2.2 readxl_1.4.1
[100] data.table_1.14.6 reprex_2.0.2 digest_0.6.31
[103] classInt_0.4-8 webshot_0.5.4 munsell_0.5.0
[106] viridisLite_0.4.1
Acknowledgments
We warmly thank Mathieu Massaviol, Remy Dernat and Khalid Belkhir for their help in using GPU machines on the Montpellier Bioinformatics Biodiversity platform, Julien Renoult for helpful discussions, Delphine Dinouart and Chloé Quillard for their precious help in manually tagging the images, and Vincent Miele for having inspired this work, and for his help and support along the way. We also thank the staff of the Federations of Hunters from the Jura and Ain counties, the hunters who helped to find locations for camera traps and the volunteers who contributed to collecting data. Our thanks also go to Hannah Clipp, Chris Rota and Ken Kellner for sharing a development version of unmarked, and an unpublished version of their paper. The Lynx Predator Prey Program was funded by the Auvergne-Rhône-Alpes Region, the Ain and Jura departmental Councils, the French National Federation of Hunters, the French Environmental Ministry based in Auvergne-Rhône-Alpes and Bourgogne-Franche-Comté Region, and the French Office for Biodiversity. Our work was also partly funded by the French National Research Agency (grant ANR-16-CE02-0007).
Reuse
Citation
@article{gimenez2022,
author = {Olivier Gimenez and Maëlis Kervellec and Jean-Baptiste
Fanjul and Anna Chaine and Lucile Marescot and Yoann Bollet and
Christophe Duchamp},
publisher = {Société Française de Statistique},
title = {Trade-Off Between Deep Learning for Species Identification
and Inference about Predator-Prey Co-Occurrence},
journal = {Computo},
date = {22-04-22},
url = {https://computo.sfds.asso.fr/published-202204-deeplearning-occupancy-lynx},
doi = {10.57750/yfm2-5f45},
issn = {2824-7795},
langid = {en},
abstract = {Deep learning is used in computer vision problems with
important applications in several scientific fields. In ecology for
example, there is a growing interest in deep learning for
automatizing repetitive analyses on large amounts of images, such as
animal species identification. However, there are challenging issues
toward the wide adoption of deep learning by the community of
ecologists. First, there is a programming barrier as most algorithms
are written in `Python` while most ecologists are versed in `R`.
Second, recent applications of deep learning in ecology have focused
on computational aspects and simple tasks without addressing the
underlying ecological questions or carrying out the statistical data
analysis to answer these questions. Here, we showcase a reproducible
`R` workflow integrating both deep learning and statistical models
using predator-prey relationships as a case study. We illustrate
deep learning for the identification of animal species on images
collected with camera traps, and quantify spatial co-occurrence
using multispecies occupancy models. Despite average model
classification performances, ecological inference was similar
whether we analysed the ground truth dataset or the classified
dataset. This result calls for further work on the trade-offs
between time and resources allocated to train models with deep
learning and our ability to properly address key ecological
questions with biodiversity monitoring. We hope that our
reproducible workflow will be useful to ecologists and applied
statisticians.}
}