If the code in this vignette has not been evaluated, a rendered version is available on the documentation site under ‘Articles’.
library(ggplot2)
library(dplyr)
library(sdmTMB)
Let’s perform index standardization with the built-in data for Pacific cod.
glimpse(pcod)
First we will create our SPDE mesh. We will use a relatively coarse
mesh for a balance between speed and accuracy in this vignette
(cutoff = 10, where cutoff is in the units of X and Y (km here) and
represents the minimum distance between points before a new mesh
vertex is added). You will likely want to use a higher-resolution
mesh for applied scenarios. You will want to make sure that increasing
the number of knots does not change the conclusions (see the
sensitivity-check sketch after the mesh plot below).
pcod_spde <- make_mesh(pcod, c("X", "Y"), cutoff = 10)
plot(pcod_spde)
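As a quick sensitivity check, you could build a finer mesh and, after refitting, confirm that the index does not change meaningfully. This is only a sketch: the cutoff of 5 and the object name pcod_spde_fine are illustrative choices, not recommendations.
pcod_spde_fine <- make_mesh(pcod, c("X", "Y"), cutoff = 5) # finer mesh; cutoff value is illustrative
pcod_spde_fine$mesh$n # number of mesh vertices (from the underlying INLA/fmesher mesh object)
# Refit with mesh = pcod_spde_fine and compare the resulting index to the one calculated below.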
Let’s fit a GLMM. Note that if you want to use this model for index
standardization, then you will likely want to include
0 + as.factor(year) or -1 + as.factor(year) so that there is a factor
predictor that represents the mean estimate for each time slice.
m <- sdmTMB(
  data = pcod,
  formula = density ~ 0 + as.factor(year),
  time = "year", mesh = pcod_spde, family = tweedie(link = "log"))
We can inspect randomized quantile residuals:
pcod$resids <- residuals(m) # randomized quantile residuals
# pcod$resids <- residuals(m, type = "mle-mcmc") # better but slower
hist(pcod$resids)
qqnorm(pcod$resids)
abline(a = 0, b = 1)
ggplot(pcod, aes(X, Y, col = resids)) + scale_colour_gradient2() +
geom_point() + facet_wrap(~year) + coord_fixed()
Now we want to predict on a fine-scale grid over the entire survey
domain. There is a grid built into the package for Queen Charlotte Sound
named qcs_grid. Our prediction grid also needs to have all the
covariates that we used in the model above (see the sketch after
glimpse(qcs_grid) below if your grid needs the year column added).
glimpse(qcs_grid)
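If your prediction grid lacked any of the covariates used in the model (here, only year), you would need to add them before predicting. A sketch using base R, where qcs_grid_years is a hypothetical object name (recent sdmTMB versions also provide replicate_df() for this purpose):
# Replicate the spatial grid once per modelled year (only needed if the grid
# does not already contain a year column):
qcs_grid_years <- do.call(
  rbind,
  lapply(unique(pcod$year), function(yr) transform(qcs_grid, year = yr))
)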
Now make the predictions on new data.
predictions <- predict(m, newdata = qcs_grid, return_tmb_object = TRUE)
Let’s make a small function to make maps.
plot_map <- function(dat, column) {
  ggplot(dat, aes(X, Y, fill = {{ column }})) +
    geom_raster() +
    facet_wrap(~year) +
    coord_fixed()
}
There are four kinds of predictions that we get out of the model. First we will show the predictions that incorporate all fixed effects and random effects:
plot_map(predictions$data, exp(est)) +
scale_fill_viridis_c(trans = "sqrt") +
ggtitle("Prediction (fixed effects + all random effects)")
We can also look at just the fixed effects, here year:
plot_map(predictions$data, exp(est_non_rf)) +
ggtitle("Prediction (fixed effects only)") +
scale_fill_viridis_c(trans = "sqrt")
We can look at the spatial random effects that represent consistent deviations in space through time that are not accounted for by our fixed effects. In other words, these deviations represent consistent biotic and abiotic factors that are affecting biomass density but are not accounted for in the model.
plot_map(predictions$data, omega_s) +
ggtitle("Spatial random effects only") +
scale_fill_gradient2()
And finally we can look at the spatiotemporal random effects that represent deviation from the fixed effect predictions and the spatial random effect deviations. These represent biotic and abiotic factors that are changing through time and are not accounted for in the model.
plot_map(predictions$data, epsilon_st) +
ggtitle("Spatiotemporal random effects only") +
scale_fill_gradient2()
When we ran our predict.sdmTMB() function, it also returned a report
from TMB in the output because we included return_tmb_object = TRUE.
We can then run our get_index() function to extract the total biomass
calculations and standard errors.
We will need to set the area argument to 4 km^2 since our grid cells
are 2 km x 2 km. If some grid cells were not fully in the survey domain
(or were on land), we could instead feed a vector of grid areas to the
area argument that matched the number of grid cells (see the sketch
after the get_index() call below).
index <- get_index(predictions, area = 4, bias_correct = TRUE)
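As a sketch of the vector-of-areas alternative mentioned above: you could start from the full 4 km^2 per cell and shrink any cells that are only partially inside the survey domain. The object partially_covered_cells and the 0.5 proportion are hypothetical placeholders.
area_vec <- rep(4, nrow(qcs_grid)) # one area per prediction-grid row, in km^2
# area_vec[partially_covered_cells] <- 4 * 0.5 # hypothetical cells and coverage proportion
# index <- get_index(predictions, area = area_vec, bias_correct = TRUE)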
ggplot(index, aes(year, est)) + geom_line() +
geom_ribbon(aes(ymin = lwr, ymax = upr), alpha = 0.4) +
xlab('Year') + ylab('Biomass estimate (kg)')
These are our biomass estimates:
mutate(index, cv = sqrt(exp(se^2) - 1)) %>%
  select(-log_est, -se) %>%
  knitr::kable(format = "pandoc", digits = c(0, 0, 0, 0, 2))
We can also calculate an index for part of the survey domain. We’ll
make an index for everything south of UTM 5700 by subsetting our
prediction grid. For more complicated spatial polygons you could
intersect the polygon with the prediction grid using something like
sf::st_intersects() (see the sketch after this example).
qcs_grid_south <- qcs_grid[qcs_grid$Y < 5700, ]
predictions_south <- predict(m, newdata = qcs_grid_south,
  return_tmb_object = TRUE)
index_south <- get_index(predictions_south, area = 4, bias_correct = TRUE)
head(index_south)
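For the sf-based approach mentioned above, a rough sketch: survey_poly is a hypothetical sf polygon in the same X/Y (UTM km) coordinates as the grid (with a matching, or equally unset, CRS).
grid_sf <- sf::st_as_sf(qcs_grid, coords = c("X", "Y")) # grid cell centres as points
inside <- lengths(sf::st_intersects(grid_sf, survey_poly)) > 0
qcs_grid_poly <- qcs_grid[inside, ]
# Then run predict() and get_index() on qcs_grid_poly as above.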
We can visually compare the two indexes:
mutate(index, region = "all") %>%
bind_rows(mutate(index_south, region = "south")) %>%
ggplot(aes(year, est)) +
geom_line(aes(colour = region)) +
geom_ribbon(aes(ymin = lwr, ymax = upr, fill = region), alpha = 0.4)