Proper posterior prediction scores
A prediction score $S(F, y)$ evaluates some measure of closeness between a prediction distribution identified by $F$, and an observed value $y$.
A basic score, and motivating remarks
A common score is the Squared Error, $S_{SE}(F, y) = (y - \mu_F)^2$, where $\mu_F$ is the mean of the prediction distribution $F$. We would like predictions to have low values for $S_{SE}(F, y)$, indicating a “good” prediction, in the specific sense that puts a penalty on the squared deviation of the prediction mean from the true observed value. We can imagine constructing other such scoring functions that penalise other aspects of the prediction.
A score where “lower is better” is called negatively oriented, and a score where “higher is better” is called positively oriented. One can always turn one type of score into the other by changing the sign, so to simplify the presentation, we’ll make all scores negatively oriented, like the squared error.
We often care about the prediction uncertainty and not just the mean (or at least we should care!). Just adding a prediction variance penalty to the squared error wouldn’t be useful, as we could then construct a new, “better”, prediction by reducing the stated prediction variance to zero. This would understate the real prediction uncertainty, so wouldn’t be a fair scoring approach for comparing different prediction models. In the next section, we make this fairness idea more precise.
Proper and strictly proper scores
The expected value of a score under a distribution identified by $G$ is denoted $S(F, G) = E_{y \sim G}[S(F, y)]$. For a negatively oriented score, we seek scoring functions that are fair, in the sense that one cannot, on average, make a better prediction than that which generated the data. This requires $S(F, G) \geq S(G, G)$ for all predictive distributions $F$ and any data-generating distribution $G$. Such scores are called proper. If, in addition, equality of the score expectations only holds when $F = G$, the score is strictly proper.
Non-strict proper scores ignore some aspect of the prediction, typically by only being sensitive to some summary information, such as the mean, median, and/or variance.
It’s notable that proper scores retain their properness under affine transformations, with just potential changes in whether they are positively or negatively oriented. If $S(F, y)$ is a proper score, then $a + b\,S(F, y)$ is also a proper score, with the same orientation if $b > 0$ and the opposite orientation if $b < 0$. The degenerate case $b = 0$ gives the score $a$ to all predictions, which is technically a proper score (you cannot do better than an ideal prediction), but a useless one (an ideal prediction is no better than any other prediction).
Examples
log-score: $S_{LS}(F, y) = -\log p_F(y)$, where $p_F$ is a predictive pdf or pmf for $F$, is a strictly proper score.
Squared Error: $S_{SE}(F, y) = (y - \mu_F)^2$ is a proper score.
Brier score:
- Binary events: $S_{Brier}(F, z) = (z - p_F)^2$, where $z$ is a binary event indicator and $p_F$ is the predictive probability of the event under $F$, is a strictly proper score for the event prediction, but non-strict with respect to any underlying outcome $y$ generating the event indicator, e.g. via $z = \mathbb{I}(y \leq c)$ for some threshold $c$.
- Class indicator events: if $y \in \{1, \dots, K\}$ is a class category outcome, the Brier score can be generalised to $S(F, y) = \sum_{k=1}^{K} (z_k - p_{F,k})^2$, which can be seen as the Squared Error for the Multinomial prediction model for $z = (z_1, \dots, z_K)$, with $z_k = \mathbb{I}(y = k)$, where $p_{F,k} = P_F(y = k)$. Sometimes, the sum is normalised by $K$. This generalised Brier score is a proper score.
Dawid-Sebastiani: $S_{DS}(F, y) = \frac{(y - \mu_F)^2}{\sigma_F^2} + \log(\sigma_F^2)$ is a proper score. It’s derived from the strictly proper log-score of a Gaussian prediction, but it’s also a non-strict proper score for other distributions. It has the advantage that it only involves the predictive mean and variance, making it computable also in cases when log-densities are hard to obtain. Since it’s based on the symmetric Gaussian distribution, it tends to be affected by skewness, so it should be applied with care in such cases.
Absolute (Median) Error: $S_{AE}(F, y) = |y - m_F|$, where $m_F$ is the median of $F$, is a proper score, with expectation minimised when the medians of $F$ and $G$ match. Note that $|y - \mu_F|$, the absolute error with respect to the expectation, is not a proper score! Another way of expressing this is that $|y - \hat{y}_F|$ is a proper score with respect to the median, i.e. it is proper when the point prediction $\hat{y}_F$ is taken to be the median of $F$, and not some other point prediction. In the applied literature, this distinction is often overlooked, and the predictive mean is inserted into both the SE and AE scores, making the resulting AE score comparisons less clear than they could be (see the simulated illustration after this list).
CRPS (Continuous Ranked Probability Score): $S_{CRPS}(F, y) = \int_{-\infty}^{\infty} \left[F(x) - \mathbb{I}(y \leq x)\right]^2 \,\mathrm{d}x$. This is a strictly proper score, related to the absolute error of point predictions.
Other scores include the Interval score, which is minimised for short prediction intervals with the intended coverage probability, and the Quantile score, which generalises the Absolute Median Error to other quantiles than the median.
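Two small illustrations of these point-prediction scores (hypothetical examples). First, a simulation showing that the absolute error rewards the median rather than the mean, as noted for the Absolute (Median) Error above:

set.seed(1)
y <- rexp(1e5) # skewed data: mean 1, median log(2)
mean(abs(y - log(2))) # AE at the true median: approximately 0.69 (smaller)
mean(abs(y - 1)) # AE at the true mean: approximately 0.74 (larger)

Second, a sketch of the quantile score in its common pinball-loss form (assuming the standard definition), which at tau = 0.5 reduces to half the absolute median error:

quantile_score <- function(q, y, tau) {
  # Negatively oriented; the expectation is minimised when q is the
  # true tau-quantile of the distribution of y.
  (as.numeric(y <= q) - tau) * (q - y)
}
mean(quantile_score(q = log(2), y = y, tau = 0.5)) # half the AE above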
Improper scores
We’ve seen that some scores are strictly proper, and others are only proper scores, sensitive to specific aspects of the predictive distribution, such as mean, median, and/or variance.
In contrast, improper scores do not fulfil the fairness idea. Such scores include the aforementioned penalised squared error, $(y - \mu_F)^2 + \sigma_F^2$, but also the negated probability/density function, $S(F, y) = -p_F(y)$. The latter might come as a surprise, since the log-score $-\log p_F(y)$ is proper.
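A small Monte Carlo illustration of this improperness (a hypothetical Gaussian example): with data from $N(0, 1)$, an overconfident prediction achieves a lower (ostensibly better) mean density score, while the proper log-score correctly prefers the true distribution.

set.seed(1)
y <- rnorm(1e6) # data generated from N(0, 1)
mean(-dnorm(y, mean = 0, sd = 1)) # true predictive distribution: approx. -0.28
mean(-dnorm(y, mean = 0, sd = 0.5)) # overconfident prediction: approx. -0.36, "better"
mean(-dnorm(y, mean = 0, sd = 1, log = TRUE)) # log-score, true model: approx. 1.42
mean(-dnorm(y, mean = 0, sd = 0.5, log = TRUE)) # log-score, overconfident: approx. 2.23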
Mean error/score
Up to this point, we only considered individual scores. When summarising predictions for a collection of observations $y_1, \dots, y_n$, we usually compute the mean score, $\overline{S} = \frac{1}{n} \sum_{i=1}^{n} S(F_i, y_i)$.
When comparing two different prediction models, $F_i$ and $F'_i$, the scores $S(F_i, y_i)$ and $S(F'_i, y_i)$ are dependent with respect to the observations $y_i$. This means that in order to more easily handle the score variability in the comparison, we should treat it as a paired sample problem. The pairwise score differences are given by $S^{\Delta}_i = S(F_i, y_i) - S(F'_i, y_i)$. It’s also much more reasonable to make conditional independence assumptions about these differences than for the plain score values $S(F_i, y_i)$; a paired analysis might look like the sketch below.
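Here, the score vectors are illustrative placeholders rather than output from fitted models:

set.seed(42)
scores_A <- rnorm(100, mean = 1.0, sd = 0.3) # per-observation scores, model A
scores_B <- rnorm(100, mean = 1.1, sd = 0.3) # per-observation scores, model B
score_diff <- scores_A - scores_B # pairwise differences
mean(score_diff) # estimated mean score difference
t.test(score_diff) # treats the differences as approximately independent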
Note that taking the average of prediction scores, or averages of prediction score differences, is quite different from assessing summary statistics of the collection of predictions, since the scores are individual for each observation; we’re not assessing the collective value distribution, as that might be misleading. For example, consider a spatial model where the estimated process has an empirical distribution of the predictive means that matches that of the observed data. Scores based on the marginal empirical distribution would not be able to detect if the spatial arrangement of the values is maximally different from the actual locations, whereas averages of individual scores would be sensitive to this.
Poisson model example
Consider a model with Poisson outcomes $y_i \sim \text{Poisson}(\lambda_i)$, conditionally on a log-linear predictor $\lambda_i = \exp(\eta_i)$, where $\eta_i$ is some linear expression in latent variables.
The posterior predictive distributions are Poisson mixture distributions across the posterior distribution of $\lambda_i$: $p(y_i \mid \text{data}) = E_{\lambda_i \mid \text{data}}\left[p_{\text{Pois}}(y_i \mid \lambda_i)\right]$.
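As a small illustration of this mixture structure (with hypothetical posterior draws standing in for real inlabru output), the predictive pmf can be estimated by averaging the conditional Poisson pmf over posterior samples of $\lambda_i$:

set.seed(1)
lambda_samples <- rgamma(2000, shape = 3, rate = 1) # placeholder posterior draws
y_grid <- 0:15
pmf <- vapply(y_grid, function(y) mean(dpois(y, lambda_samples)), 0.0)
sum(pmf) # close to 1 if y_grid covers most of the predictive mass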
Moment scores
For the Squared Error and Dawid-Sebastiani scores, we’ll need the posterior expectation and variance: $\mu_F = E(y_i \mid \text{data}) = E(\lambda_i \mid \text{data})$ and, by the law of total variance, $\sigma_F^2 = \text{Var}(y_i \mid \text{data}) = E(\lambda_i \mid \text{data}) + \text{Var}(\lambda_i \mid \text{data})$, i.e. the sum of the posterior mean and variance for $\lambda_i$.
The SE and DS scores are therefore relatively easy to compute after estimating a model with inlabru. You just need to estimate the posterior mean and variance with predict() for each test data point. If eta is an expression for the linear predictor, and newdata holds the covariate information for the prediction points, run something like the sketch below.
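A minimal sketch (assuming a fitted model object fit and a vector y of observed test values; names are illustrative):

pred <- predict(fit, newdata,
  formula = ~ exp(eta),
  n.samples = 2000
)
# Posterior predictive moments: mu = E(lambda | data) and
# sigma^2 = E(lambda | data) + Var(lambda | data):
mu <- pred$mean
sigma2 <- pred$mean + pred$sd^2
SE_scores <- (y - mu)^2
DS_scores <- (y - mu)^2 / sigma2 + log(sigma2)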
Log-Probability and log-density scores
The full log-score can actually also be estimated/computed in a similar way. We seek, for a fixed observation y, the log-probability $\log P(y_i = y \mid \text{data})$. The probability is $P(y_i = y \mid \text{data}) = E_{\lambda_i \mid \text{data}}\left[p_{\text{Pois}}(y \mid \lambda_i)\right]$, so we can estimate it using predict():
pred <- predict(fit,
  newdata,
  formula = ~ dpois(y, lambda = exp(eta)),
  n.samples = 2000
)
log_score <- log(pred$mean) # log of the posterior mean of dpois(y, lambda)
to estimate the log_score (increase n.samples if needed for sufficiently small Monte Carlo error).
CRPS
Yet another option would be to use the CRPS, which for each prediction value $y_i$ would be $S_{CRPS}(F_i, y_i) = \sum_{k=0}^{\infty} \left[F_i(k) - \mathbb{I}(y_i \leq k)\right]^2$, since the predictive distribution is supported on the integers. For this, one would first need to get $F_i(k) = P(y_i \leq k \mid \text{data})$ from a predict call with ppois(k, lambda = exp(eta)), for a vector of $k$ values $(0, 1, \dots, K)$, for each $i$, for some sufficiently large $K$ for the remainder to be negligible. However, to avoid repeated predict() calls for each $i$, the storage requirements are of order $nK$ for each posterior sample. To avoid that, one option would be to reformulate the estimator into a recursive estimator, so that batches of simulations could be used to iteratively compute the estimator.
A basic estimator can proceed as follows:
1. Define $K$ sufficiently large for the posterior predictive probability above $K$ to be negligible. Perhaps a value like $K = \max_i(y_i) + 4\sqrt{\max_i(y_i)}$ might be sufficient. You can check afterwards, and change it if needed.
2. Simulate samples $\lambda_i^{(j)}$, $j = 1, \dots, N$, from $p(\lambda_i \mid \text{data})$ using generate() (size $N$). For each $i$, use the samples to estimate the residuals $\widehat{F}_i(k) - \mathbb{I}(y_i \leq k)$, for $k = 0, 1, \dots, K$, with $\widehat{F}_i(k) = \frac{1}{N} \sum_{j=1}^{N} F_{\lambda_i^{(j)}}(k)$.
3. Compute $\widehat{S}_{CRPS}(F_i, y_i) = \sum_{k=0}^{K} \left[\widehat{F}_i(k) - \mathbb{I}(y_i \leq k)\right]^2$.
# some large value, so that 1 - F(K) is small
max_K <- ceiling(max(y) + 4 * sqrt(max(y)))
samples <- generate(fit, newdata,
  formula = ~ {
    lambda <- exp(eta)
    k <- seq(0, max_K)
    do.call(
      rbind,
      lapply(
        seq_along(y),
        function(i) {
          Fpred <- ppois(k, lambda[i])
          data.frame(
            k = c(k, k),
            i = c(i, i),
            type = rep(c("F", "residual"), each = length(Fpred)),
            value = c(Fpred, Fpred - (y[i] <= k))
          )
        }
      )
    )
  },
  n.samples = 2000
)
# Average the sampled data.frames over the posterior samples to
# estimate F_i(k) and the residuals F_i(k) - I(y_i <= k):
library(dplyr)
pred <- bind_rows(samples) %>%
  group_by(i, k, type) %>%
  summarise(mean = mean(value), .groups = "drop")
F_estimate <- pred %>%
  filter(type == "F", k == max_K) %>%
  pull(mean)
crps_score <- pred %>%
  filter(type == "residual") %>%
  group_by(i) %>%
  summarise(crps = sum(mean^2), .groups = "drop") %>%
  pull(crps)
# Check that the cutoff point K has nearly probability mass 1 below it,
# for all i:
min(F_estimate)
Posterior expectation of conditional scores
In some cases, one might be tempted to consider posterior distribution properties of conditional predictive scores, e.g. the posterior expectation of $S_{SE}(F_\lambda, y) = (y - \lambda)^2$ under the posterior distribution for $\lambda$ in the Poisson model.
For the Squared Error, $E_{\lambda \mid \text{data}}\left[(y - \lambda)^2\right] = \left(y - E[\lambda \mid \text{data}]\right)^2 + \text{Var}(\lambda \mid \text{data})$. It’s noteworthy that this is similar to the improper score $(y - \mu_F)^2 + \sigma_F^2$, and also in this new case, a model with artificially small posterior variance can achieve a smaller expected score, making this type of construction problematic to interpret.
However, in some cases it does provide alternative approaches for how to compute the proper scores for the full posterior predictive distributions. If $\lambda^{(j)}$, $j = 1, \dots, N$, are samples from the posterior distribution of $\lambda$, one basic score estimator is $\widehat{S}_{SE} = \left(y - \frac{1}{N} \sum_{j=1}^{N} \lambda^{(j)}\right)^2$, with the averaging over the samples inside the quadratic expression, and we can use predict() for that averaging, as in the sketch below.
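A minimal sketch of this basic estimator (assuming, as before, a fitted model fit, prediction data newdata, and observed values y):

pred <- predict(fit, newdata, formula = ~ exp(eta), n.samples = 2000)
# pred$mean is the Monte Carlo average of lambda = exp(eta), so the
# averaging happens inside the quadratic expression:
scores_SE <- (y - pred$mean)^2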
If we instead take advantage of the new expression above, we have $\left(y - E[\lambda \mid \text{data}]\right)^2 = E_{\lambda \mid \text{data}}\left[(y - \lambda)^2\right] - \text{Var}(\lambda \mid \text{data})$, so the score can be estimated by
pred <- predict(fit, newdata, formula = ~ list(
cond_scores = (y - exp(eta))^2,
lambda = exp(eta)
))
scores <- pred$cond_scores$mean - pred$lambda$sd^2
For this particular case, this approach is unlikely to be an improvement or more accurate than the basic estimator. However, for other scores there may potentially be practical benefits.
An alternative estimator for CRPS
For the CRPS score, there are closed form expressions available for some distributions, conditionally on their parameters, but not for the full predictive mixture distribution. We take a similar approach as for the SE, and let $F$ and $F_\lambda$ denote the unconditional and conditional cumulative distribution functions for the posterior predictive distribution. Then $F(k) = E_{\lambda \mid \text{data}}\left[F_\lambda(k)\right]$ for all $k$, and since $E_{\lambda \mid \text{data}}\left[\left(F_\lambda(k) - \mathbb{I}(y \leq k)\right)^2\right] = \left(F(k) - \mathbb{I}(y \leq k)\right)^2 + \text{Var}_{\lambda \mid \text{data}}\left(F_\lambda(k)\right)$ for each $k$, $S_{CRPS}(F, y) = E_{\lambda \mid \text{data}}\left[S_{CRPS}(F_\lambda, y)\right] - \sum_{k=0}^{\infty} \text{Var}_{\lambda \mid \text{data}}\left(F_\lambda(k)\right)$. Note that we didn’t need to use any particular model properties here, so this holds for any predictive model with mixture structure, when $\lambda$ is the collection of model parameters. We also note the resemblance to the alternative expression for the Squared Error; this is because the CRPS can be seen as the integral (here, a sum) over all Brier scores for predicting event indicators of the form $z_k = \mathbb{I}(y \leq k)$, with probabilities $p_k = F(k)$.
In the Poisson case, we can now estimate the CRPS scores as shown below, which makes the code a bit easier than the previous version that needed generate(). However, it can be shown that the two approaches have nearly identical Monte Carlo variance, so the previous version is likely preferable as it doesn’t require knowing a closed form CRPS expression.
poisson_crps <- function(y, rate) {
  # Compute the CRPS score for a single y, for the given rate parameter.
  # A closed form expression is available (see the scoringRules link below);
  # here we simply delegate to scoringRules::crps_pois().
  scoringRules::crps_pois(y, lambda = rate)
}
max_K <- 100 # some large value, so that 1-F(K) is small
pred <- predict(fit, newdata,
  formula = ~ {
    lambda <- exp(eta)
    list(
      crps = vapply(
        seq_along(y),
        function(i) poisson_crps(y[i], lambda[i]),
        0.0
      ),
      F = do.call(
        rbind,
        lapply(
          seq_along(y),
          function(i) {
            data.frame(
              i = i,
              F = ppois(seq(0, max_K), lambda[i])
            )
          }
        )
      )
    )
  },
  n.samples = 2000
)
crps_score <-
  pred$crps$mean -
  (pred$F %>%
    group_by(i) %>%
    summarise(F_var = sum(sd^2), .groups = "drop") %>%
    pull(F_var))
Formulas and functions for Poisson CRPS, as well as for other distributions, can be found at http://cran.nexr.com/web/packages/scoringRules/vignettes/crpsformulas.html#poisson-distribution-pois
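For instance, the scoringRules package provides the closed form Poisson CRPS directly; a small usage example (assuming the package is installed):

library(scoringRules)
crps_pois(y = 3, lambda = 2.5) # CRPS for one observation under Poisson(2.5)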