LGCPs - Spatial covariates
David Borchers and Finn Lindgren
Generated on 2023-12-03
Source: vignettes/articles/2d_lgcp_covars.Rmd
Set things up
library(INLA)
library(inlabru)
library(fmesher)
library(RColorBrewer)
library(ggplot2)
bru_safe_sp(force = TRUE)
bru_options_set(control.compute = list(dic = TRUE)) # Activate DIC output
Introduction
We are going to fit spatial models to the gorilla data in this practical, using factor and continuous explanatory variables. We will fit one model using the factor variable `vegetation`, and another using the continuous covariate `elevation`. (Jump to the bottom of the practical if you want to start gently with a 1D example!)
Get the data
data(gorillas, package = "inlabru")
This dataset is a list (see `help(gorillas)` for details).
Extract the objects you need from the list, for convenience:
nests <- gorillas$nests
mesh <- gorillas$mesh
boundary <- gorillas$boundary
gcov <- gorillas$gcov
Factor covariates
Look at the vegetation type, nests and boundary:
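The plotting code is not shown here; a minimal sketch using inlabru's `gg()` methods (the colour and point size choices are illustrative assumptions):
ggplot() +
  gg(gcov$vegetation) + # vegetation type raster
  gg(boundary, alpha = 0) + # survey boundary outline
  gg(nests, color = "white", cex = 0.5) # nest locations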
Or, with the mesh:
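Again a sketch, not the original code, adding the mesh on top of the covariate:
ggplot() +
  gg(gcov$vegetation) +
  gg(mesh) + # mesh edges
  gg(boundary, alpha = 0) +
  gg(nests, color = "white", cex = 0.5)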
A model with vegetation type only
It seems that vegetation type might be a good predictor because nearly all the nests fall in vegetation type `Primary`. So we construct a model with vegetation type as a fixed effect. To do this, we need to tell `lgcp()` how to find the vegetation type at any point in space, and we do this by creating model components with a fixed effect that we call `vegetation` (we could call it anything), as follows:
comp1 <- coordinates ~ vegetation(gcov$vegetation, model = "factor_full") - 1
Notes:

* We need to tell `lgcp()` that this is a factor fixed effect, which we do with `model = "factor_full"`, giving one coefficient for each factor level.
* We need to be careful about overparameterisation when using factors. Unlike regression functions such as `lm()`, `glm()` or `gam()`, `lgcp()` (and `inlabru` generally) does not automatically remove the first level and absorb it into an intercept. Instead, we can either use `model = "factor_full"` without an intercept, or `model = "factor_contrast"`, which does remove the first level.
comp1alt <- coordinates ~ vegetation(gcov$vegetation, model = "factor_contrast") + Intercept(1)
Fit the model as usual:
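The fitting call itself is not shown above; a minimal sketch, mirroring the `lgcp()` call used for `fit3` later in this practical:
fit1 <- lgcp(comp1,
  data = nests,
  samplers = boundary,
  domain = list(coordinates = mesh)
)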
Predict the intensity and plot the median intensity surface. (In older versions, prediction took some time because we did not have vegetation values outside the mesh, so `inlabru` needed to predict these first. Since v2.0.0, the vegetation covariate has been pre-extended.)
The `predict()` function of `inlabru` accepts in its `data` argument a `SpatialPointsDataFrame`, a `SpatialPixelsDataFrame` or a `data.frame`. We can use the `fm_pixels()` function to generate a `SpatialPixelsDataFrame` covering only the interior of the boundary, using its `mask` argument, as shown below.
pred.df <- fm_pixels(mesh, mask = boundary, format = "sp")
int1 <- predict(fit1, pred.df, ~ exp(vegetation))
ggplot() +
gg(int1) +
gg(boundary, alpha = 0, lwd = 2) +
gg(nests, color = "DarkGreen")
Not surprisingly, given that most nests are in `Primary` vegetation, the high density is in this vegetation. But there are substantial patches of predicted high density that have no nests, and some areas of predicted low density that have nests. What about the estimated abundance (there are really 647 nests there):
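The abundance calculation is not shown above; a minimal sketch, following the same pattern as the Lambda2 and Lambda3 predictions below:
Lambda1 <- predict(
  fit1,
  fm_int(mesh, boundary), # integration points and weights over the survey region
  ~ sum(weight * exp(vegetation))
)
Lambda1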
A model with vegetation type and a SPDE type smoother
Let's try to explain the pattern in nest distribution that is not captured by the vegetation covariate, using an SPDE:
pcmatern <- inla.spde2.pcmatern(mesh,
prior.sigma = c(0.1, 0.01),
prior.range = c(0.1, 0.01)
)
comp2 <- coordinates ~
-1 +
vegetation(gcov$vegetation, model = "factor_full") +
mySmooth(coordinates, model = pcmatern)
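The `fit2` object used below is obtained by fitting this model; a minimal sketch, again mirroring the `fit3` call:
fit2 <- lgcp(comp2,
  data = nests,
  samplers = boundary,
  domain = list(coordinates = mesh)
)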
And plot the intensity surface (the code below shows the lower 2.5% quantile, `q0.025`)
int2 <- predict(fit2, pred.df, ~ exp(mySmooth + vegetation), n.samples = 1000)
ggplot() +
gg(int2, aes(fill = q0.025)) +
gg(boundary, alpha = 0, lwd = 2) +
gg(nests)
… and the expected integrated intensity (mean of abundance)
Lambda2 <- predict(
fit2,
fm_int(mesh, boundary),
~ sum(weight * exp(mySmooth + vegetation))
)
Lambda2
#> mean sd q0.025 q0.5 q0.975 median mean.mc_std_err
#> 1 683.0676 27.58948 630.5915 678.7792 732.9523 678.7792 2.758948
#> sd.mc_std_err
#> 1 1.82848
Look at the contributions to the linear predictor from the SPDE and from vegetation:
lp2 <- predict(fit2, pred.df, ~ list(
smooth_veg = mySmooth + vegetation,
smooth = mySmooth,
veg = vegetation
))
The function `scale_fill_gradientn()` sets the scale for the plot legend. Here we set it to span the range of the three linear predictor components being plotted (medians are plotted by default).
lprange <- range(lp2$smooth_veg$median, lp2$smooth$median, lp2$veg$median)
csc <- scale_fill_gradientn(colours = brewer.pal(9, "YlOrRd"), limits = lprange)
plot.lp2 <- ggplot() +
gg(lp2$smooth_veg) +
csc +
theme(legend.position = "bottom") +
gg(boundary, alpha = 0) +
ggtitle("mySmooth + vegetation")
plot.lp2.spde <- ggplot() +
gg(lp2$smooth) +
csc +
theme(legend.position = "bottom") +
gg(boundary, alpha = 0) +
ggtitle("mySmooth")
plot.lp2.veg <- ggplot() +
gg(lp2$veg) +
csc +
theme(legend.position = "bottom") +
gg(boundary, alpha = 0) +
ggtitle("vegetation")
multiplot(plot.lp2, plot.lp2.spde, plot.lp2.veg, cols = 3)
A model with SPDE only
Do we need vegetation at all? Fit a model with only an SPDE + Intercept, and choose between models on the basis of DIC, using ‘deltaIC()’.
comp3 <- coordinates ~ mySmooth(coordinates, model = pcmatern) + Intercept(1)
fit3 <- lgcp(comp3,
data = nests,
samplers = boundary,
domain = list(coordinates = mesh)
)
int3 <- predict(fit3, pred.df, ~ exp(mySmooth + Intercept))
ggplot() +
gg(int3) +
gg(boundary, alpha = 0) +
gg(nests)
Lambda3 <- predict(
fit3,
fm_int(mesh, boundary),
~ sum(weight * exp(mySmooth + Intercept))
)
Lambda3
#> mean sd q0.025 q0.5 q0.975 median mean.mc_std_err
#> 1 670.5688 28.82681 613.7787 670.9052 727.199 670.9052 2.882681
#> sd.mc_std_err
#> 1 1.945967
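The comparison tables below are presumably produced by a call along these lines (a sketch; the rendered tables may have been formatted separately):
deltaIC(fit1, fit2, fit3)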
| Model | DIC | Delta.DIC |
|---|---|---|
| fit1 | -562.5418 | 0.000 |
| fit3 | 524.2575 | 1086.799 |
| fit2 | 618.6010 | 1181.143 |
NOTE: the behaviour of DIC is currently a bit unclear and is being investigated. WAIC is related to leave-one-out cross-validation and is not appropriate to use with the current LGCP likelihood implementation.
Classic mode:
| Model | DIC | Delta.DIC |
|---|---|---|
| fit2 | 2224.131 | 0.00000 |
| fit3 | 2274.306 | 50.17504 |
| fit1 | 3124.784 | 900.65339 |
Experimental mode:
| Model | DIC | Delta.DIC |
|---|---|---|
| fit1 | -563.3583 | 0.000 |
| fit3 | 509.4010 | 1072.759 |
| fit2 | 597.6459 | 1161.004 |
CV and SPDE parameters for Model 2
We are going with model `fit2`. Let's look at the spatial distribution of the coefficient of variation.
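The coefficient-of-variation plot is not shown above; a minimal sketch, computing CV as sd/mean from the `int2` prediction:
ggplot() +
  gg(int2, aes(fill = sd / mean)) + # coefficient of variation of the intensity
  gg(boundary, alpha = 0, lwd = 2) +
  gg(nests)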
Plot the vegetation "fixed effect" posteriors. First get their names from `$marginals.random$vegetation` of the fitted object, which contains the fixed effect marginal distribution data:
flist <- vector("list", NROW(fit2$summary.random$vegetation))
for (i in seq_along(flist)) flist[[i]] <- plot(fit2, "vegetation", index = i)
multiplot(plotlist = flist, cols = 3)
Use `spde.posterior()` to obtain, and then plot, the SPDE parameter posteriors and the Matern correlation and covariance functions for this model.
spde.range <- spde.posterior(fit2, "mySmooth", what = "range")
spde.logvar <- spde.posterior(fit2, "mySmooth", what = "log.variance")
range.plot <- plot(spde.range)
var.plot <- plot(spde.logvar)
multiplot(range.plot, var.plot)
corplot <- plot(spde.posterior(fit2, "mySmooth", what = "matern.correlation"))
covplot <- plot(spde.posterior(fit2, "mySmooth", what = "matern.covariance"))
multiplot(covplot, corplot)
Continuous covariates
Now let's try a model with elevation as a (continuous) explanatory variable. (First centre elevations for more stable fitting.)
elev <- gcov$elevation
elev$elevation <- elev$elevation - mean(elev$elevation, na.rm = TRUE)
ggplot() +
gg(elev) +
gg(boundary, alpha = 0)
The elevation variable here is of class `SpatialGridDataFrame`, which can be handled in the same way as the vegetation covariate. However, since in some cases data may be stored differently, or some post-processing is needed, other methods may be required to access the stored values. In such cases, we can define a function that knows how to evaluate the covariate at arbitrary points in the survey region, and call that function in the component definition. Here we can use a powerful method from the `sp` package to do this, via the `eval_spatial()` method, which handles the extraction automatically and also supports `terra` `SpatRaster` and `sf` geometry point objects, as well as mismatching coordinate systems. In the following evaluator function, we only add infilling of missing values as a post-processing step.
f.elev <- function(where) {
# Extract the values
v <- eval_spatial(elev, where, layer = "elevation")
# Fill in missing values
if (any(is.na(v))) {
v <- bru_fill_missing(elev, where, v)
}
return(v)
}
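As a quick, purely illustrative check (a sketch assuming `eval_spatial()` handles these sp objects directly), the evaluator can be called on the nest locations to return the centred elevation at each nest:
head(f.elev(nests)) # centred elevation values at the first few nest locations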
For brevity, we are not going to consider models with elevation only, with elevation and an SPDE, and with SPDE only. We will just fit one with elevation and an SPDE. We create our model to pass to `lgcp()` thus:
matern <- inla.spde2.pcmatern(mesh,
prior.sigma = c(0.1, 0.01),
prior.range = c(0.1, 0.01)
)
ecomp <- coordinates ~ elev(f.elev(.data.), model = "linear") +
mySmooth(coordinates, model = matern) + Intercept(1)
Note how the elevation effect is defined. When we used the `Spatial` grid object directly (causing `inlabru` to automatically call `eval_spatial()`), we specified it like

`vegetation(gcov$vegetation, model = "factor_full")`

whereas with the function method we specify the covariate like this:

`elev(f.elev(.data.), model = "linear")`

We also now include an intercept term.
The model is fitted in the usual way:
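The fitting call is not shown above; a sketch following the same pattern as the earlier fits:
efit <- lgcp(ecomp,
  data = nests,
  samplers = boundary,
  domain = list(coordinates = mesh)
)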
Summary and model selection
summary(efit)
#> inlabru version: 2.10.0.9000
#> INLA version: 23.11.26
#> Components:
#> elev: main = linear(f.elev(.data.)), group = exchangeable(1L), replicate = iid(1L)
#> mySmooth: main = spde(coordinates), group = exchangeable(1L), replicate = iid(1L)
#> Intercept: main = linear(1), group = exchangeable(1L), replicate = iid(1L)
#> Likelihoods:
#> Family: 'cp'
#> Data class: 'SpatialPointsDataFrame'
#> Predictor: coordinates ~ .
#> Time used:
#> Pre = 0.347, Running = 8.04, Post = 0.318, Total = 8.7
#> Fixed effects:
#> mean sd 0.025quant 0.5quant 0.975quant mode kld
#> elev 0.004 0.001 0.002 0.004 0.006 0.004 0
#> Intercept 1.132 0.477 0.160 1.143 2.042 1.143 0
#>
#> Random effects:
#> Name Model
#> mySmooth SPDE2 model
#>
#> Model hyperparameters:
#> mean sd 0.025quant 0.5quant 0.975quant mode
#> Range for mySmooth 1.76 0.206 1.393 1.75 2.20 1.720
#> Stdev for mySmooth 1.01 0.086 0.856 1.00 1.19 0.991
#>
#> Deviance Information Criterion (DIC) ...............: 518.63
#> Deviance Information Criterion (DIC, saturated) ....: NA
#> Effective number of parameters .....................: -827.49
#>
#> Watanabe-Akaike information criterion (WAIC) ...: 3097.09
#> Effective number of parameters .................: 899.76
#>
#> Marginal log-Likelihood: -1255.01
#> is computed
#> Posterior summaries for the linear predictor and the fitted values are computed
#> (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')
deltaIC(fit1, fit2, fit3, efit)
#> Model DIC Delta.DIC
#> 1 fit1 -562.5418 0.000
#> 2 efit 518.6271 1081.169
#> 3 fit3 524.2575 1086.799
#> 4 fit2 618.6010 1181.143
Predict and plot the density
e.int <- predict(efit, pred.df, ~ exp(mySmooth + elev + Intercept))
e.int.log <- predict(efit, pred.df, ~ (mySmooth + elev + Intercept))
ggplot() +
gg(e.int, aes(fill = log(sd))) +
gg(boundary, alpha = 0) +
gg(nests, shape = "+")
ggplot() +
gg(e.int.log, aes(fill = exp(mean + sd^2 / 2))) +
gg(boundary, alpha = 0) +
gg(nests, shape = "+")
Now look at the elevation and SPDE effects in space. We leave out the Intercept because it would swamp the spatial effects of elevation and the SPDE in the plots, and we are interested in comparing those two effects.
First we need to predict on the linear predictor scale.
e.lp <- predict(
efit,
pred.df,
~ list(
smooth_elev = mySmooth + elev,
elev = elev,
smooth = mySmooth
)
)
The code below, which is very similar to that used for the vegetation factor variable, produces the plots we want.
lprange <- range(e.lp$smooth_elev$mean, e.lp$elev$mean, e.lp$smooth$mean)
library(RColorBrewer)
csc <- scale_fill_gradientn(colours = brewer.pal(9, "YlOrRd"), limits = lprange)
plot.e.lp <- ggplot() +
gg(e.lp$smooth_elev, mask = boundary) +
csc +
theme(legend.position = "bottom") +
gg(boundary, alpha = 0) +
ggtitle("SPDE + elevation")
plot.e.lp.spde <- ggplot() +
gg(e.lp$smooth, mask = boundary) +
csc +
theme(legend.position = "bottom") +
gg(boundary, alpha = 0) +
ggtitle("SPDE")
plot.e.lp.elev <- ggplot() +
gg(e.lp$elev, mask = boundary) +
csc +
theme(legend.position = "bottom") +
gg(boundary, alpha = 0) +
ggtitle("elevation")
multiplot(plot.e.lp,
plot.e.lp.spde,
plot.e.lp.elev,
cols = 3
)
You might also want to look at the posteriors of the fixed effects and of the SPDE. Adapt the code used for the vegetation factor to do this.
LambdaE <- predict(
efit,
fm_int(mesh, boundary),
~ sum(weight * exp(Intercept + elev + mySmooth))
)
LambdaE
#> mean sd q0.025 q0.5 q0.975 median mean.mc_std_err
#> 1 669.4047 25.16072 628.5246 667.963 713.6262 667.963 2.516072
#> sd.mc_std_err
#> 1 1.836454
flist <- vector("list", NROW(efit$summary.fixed))
for (i in seq_along(flist)) {
flist[[i]] <- plot(efit, rownames(efit$summary.fixed)[i])
}
multiplot(plotlist = flist, cols = 2)
Plot the SPDE parameter posteriors and the Matern correlation and covariance functions for this model.
spde.range <- spde.posterior(efit, "mySmooth", what = "range")
spde.logvar <- spde.posterior(efit, "mySmooth", what = "log.variance")
range.plot <- plot(spde.range)
var.plot <- plot(spde.logvar)
multiplot(range.plot, var.plot)
corplot <- plot(spde.posterior(efit, "mySmooth", what = "matern.correlation"))
covplot <- plot(spde.posterior(efit, "mySmooth", what = "matern.covariance"))
multiplot(covplot, corplot)
Also estimate abundance. The `data.frame` in the second call leads to inclusion of `N` in the prediction object, for easier plotting.
Lambda <- predict(
efit, fm_int(mesh, boundary),
~ sum(weight * exp(mySmooth + elev + Intercept))
)
Lambda
#> mean sd q0.025 q0.5 q0.975 median mean.mc_std_err
#> 1 669.5452 30.31325 613.6689 666.8134 728.8881 666.8134 3.031325
#> sd.mc_std_err
#> 1 2.124781
Nest.e <- predict(
efit,
fm_int(mesh, boundary),
~ data.frame(
N = 200:1000,
density = dpois(200:1000,
lambda = sum(weight * exp(mySmooth + elev + Intercept))
)
),
n.samples = 2000
)
Plot in the same way as in previous practicals:
Nest.e$plugin_estimate <- dpois(Nest.e$N, lambda = Lambda$median)
ggplot(data = Nest.e) +
geom_line(aes(x = N, y = mean, colour = "Posterior")) +
geom_line(aes(x = N, y = plugin_estimate, colour = "Plugin"))
Non-spatial evaluation of the covariate effect
The previous examples of posterior prediction focused on spatial prediction. From `inlabru` version 2.2.8, a feature is available for overriding the component input value specification from the component definition. Each model component can be evaluated directly, for arbitrary values, via functions named by adding the suffix `_eval` to the component name in the predictor expression, while disabling normal component evaluation for all components with `include = character(0)` (since we're both bypassing the normal input to the `elev` component and not supplying data for the other components). From version 2.8.0, `inlabru` attempts to automatically detect which model components are used in the expression, and the `include` argument can usually be left out entirely.
Since the elevation effect in this model is linear, the resulting plot isn’t very interesting, but the same method can be applied to non-linear effects as well, and combined into general R expressions.
elev.pred <- predict(
efit,
data.frame(elevation = seq(0, 100, length.out = 1000)),
formula = ~ elev_eval(elevation),
include = character(0) # Not needed from version 2.8.0
)
ggplot(elev.pred) +
geom_line(aes(elevation, mean)) +
geom_ribbon(
aes(elevation,
ymin = q0.025,
ymax = q0.975
),
alpha = 0.2
) +
geom_ribbon(
aes(elevation,
ymin = mean - 1 * sd,
ymax = mean + 1 * sd
),
alpha = 0.2
)
A 1D Example
Try fitting a 1-dimensional model to the point data in the `inlabru` dataset `Poisson2_1D`. This comes with a covariate function called `cov2_1D`. Try to reproduce the plot below (used in lectures) showing the effects of the `Intercept + z` and the SPDE. (You may find it helpful to build on the model you fitted in the previous practical, adding the covariate to the model specification.)
data(Poisson2_1D)
ss <- seq(0, 55, length = 200)
z <- cov2_1D(ss)
x <- seq(1, 55, length = 100)
mesh <- fm_mesh_1d(x, degree = 1)
comp <- x ~
beta_z(cov2_1D(x), model = "linear") +
spde1D(x, model = inla.spde2.matern(mesh)) +
Intercept(1)
fitcov1D <- lgcp(comp, pts2, domain = list(x = mesh))
pr.df <- data.frame(x = x)
prcov1D <- predict(
fitcov1D,
pr.df,
~ list(
total = exp(beta_z + spde1D + Intercept),
fx = exp(beta_z + Intercept),
spde = exp(spde1D)
)
)
ggplot() +
gg(prcov1D$total, color = "red") +
geom_line(aes(x = prcov1D$spde$x, y = prcov1D$spde$median), col = "blue", lwd = 1.25) +
geom_line(aes(x = prcov1D$fx$x, y = prcov1D$fx$median), col = "green", lwd = 1.25) +
geom_point(data = pts2, aes(x = x), y = 0.2, shape = "|", cex = 4) +
xlab(expression(bold(s))) +
ylab(expression(hat(lambda)(bold(s)) ~ ~"and its components")) +
annotate(geom = "text", x = 40, y = 6, label = "Intensity", color = "red") +
annotate(geom = "text", x = 40, y = 5.5, label = "z-effect", color = "green") +
annotate(geom = "text", x = 40, y = 5, label = "SPDE", color = "blue")