Implementations must return a logical vector of `TRUE`/`FALSE` values for the subset such that, given the full `A` matrix and `values` output, `A[, subset, drop = FALSE]` and `values[subset]` (or `values[subset, , drop = FALSE]` for `data.frame` values) are equal to the `inla_f = TRUE` versions of `A` and `values`. The default method uses the `ibm_values()` output to construct the subset indexing.
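To make this contract concrete, here is a small base-R illustration. The matrix, values, and subset below are invented for demonstration (no inlabru code is involved): applying a logical subset to the columns of `A` and the entries of `values` yields the `inla_f = TRUE` versions.

```r
# Hypothetical full-mapper output (invented for illustration):
A <- matrix(1:12, nrow = 3, ncol = 4)   # full A matrix (inla_f = FALSE)
values <- c(10, 20, 30, 40)             # full values vector (inla_f = FALSE)

# A logical subset vector of the kind ibm_inla_subset() must return:
subset <- c(TRUE, FALSE, TRUE, TRUE)

# Applying the subset gives the inla_f = TRUE versions:
A_inla <- A[, subset, drop = FALSE]     # keeps columns 1, 3, 4
values_inla <- values[subset]           # c(10, 30, 40)
```

Note the `drop = FALSE`: without it, subsetting down to a single column would silently collapse `A` to a plain vector.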
Methods (by class)
`ibm_inla_subset(default)`: Uses the `ibm_values()` output to construct the inla subset indexing as the difference between `inla_f = FALSE` and `inla_f = TRUE`. Extra arguments such as `multi` are passed on to `ibm_values()`. This means it supports both regular vector values and `multi = 1` `data.frame` values.
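The default method's comparison of the two `ibm_values()` outputs can be sketched as a membership test (a simplified illustration, not the inlabru source; plain vectors stand in for the `ibm_values()` results):

```r
# Stand-ins for ibm_values(mapper, inla_f = FALSE) and
# ibm_values(mapper, inla_f = TRUE); the values are invented:
values_full <- c(1, 2, 3, 4, 5)
values_inla <- c(2, 4, 5)

# The logical subset marks which of the full values are retained
# in the inla_f = TRUE version:
subset <- values_full %in% values_inla
subset
#> c(FALSE, TRUE, FALSE, TRUE, TRUE)
```

Under these assumptions, `values_full[subset]` then reproduces `values_inla`, which is exactly the equality the contract above requires.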
See also
Other mapper methods: `bru_mapper_generics`, `ibm_eval()`, `ibm_eval2()`, `ibm_invalid_output()`, `ibm_is_linear()`, `ibm_jacobian()`, `ibm_linear()`, `ibm_n()`, `ibm_n_output()`, `ibm_names()`, `ibm_simplify()`, `ibm_values()`