(Non-)Linear Tests for Null Hypotheses, Joint Hypotheses, Equivalence, Non-Superiority, and Non-Inferiority
Source: R/hypotheses.R
hypotheses.Rd

Uncertainty estimates are calculated as first-order approximate standard errors for linear or non-linear functions of a vector of random variables with known or estimated covariance matrix. In that sense, hypotheses() emulates the behavior of the excellent and well-established car::deltaMethod and car::linearHypothesis functions, but it supports more models; requires fewer dependencies; expands the range of tests to equivalence and superiority/inferiority; and offers convenience features like robust standard errors.
To learn more, read the hypothesis tests vignette or visit the package website.
Warning #1: Tests are conducted directly on the scale defined by the type argument. For some models, it can make sense to conduct hypothesis or equivalence tests on the "link" scale instead of the "response" scale, which is often the default.
Warning #2: For hypothesis tests on objects produced by the marginaleffects package, it is safer to use the hypothesis argument of the original function. Using hypotheses() may not work in certain environments, in lists, or when working programmatically with *apply style functions.
Warning #3: The tests assume that the hypothesis expression is (approximately) normally distributed, which for non-linear functions of the parameters may not be realistic. More reliable confidence intervals can be obtained using the inferences() function with method = "boot".
Usage
hypotheses(
model = NULL,
hypothesis = NULL,
vcov = TRUE,
conf_level = NULL,
df = NULL,
equivalence = NULL,
joint = FALSE,
joint_test = "f",
multcomp = FALSE,
numderiv = "fdforward",
...
)
Arguments
- model
Model object or object generated by the comparisons(), slopes(), or predictions() functions.
- hypothesis
Specify a hypothesis test or custom contrast using a number, formula, string equation, vector, matrix, or function.
Number: The null hypothesis used in the computation of Z and p (before applying transform).
String: Equation to specify linear or non-linear hypothesis tests. Two-tailed tests must include an equal sign (=). One-tailed tests must start with < or >. If the terms in coef(object) uniquely identify estimates, they can be used in the formula. Otherwise, use b1, b2, etc. to identify the position of each parameter. The b* wildcard can be used to test hypotheses on all estimates. When the hypothesis string represents a two-sided equation, the estimate column holds the value of the left side minus the right side of the equation. If a named vector is used, the names are used as labels in the output. Examples:
hp = drat
hp + drat = 12
b1 + b2 + b3 = 0
b* / b1 = 1
<= 0
>= -3.5
b1 >= 10
Formula: lhs ~ rhs | group
lhs:
ratio (null = 1)
difference (null = 0)
Leave empty for default value
rhs:
pairwise and revpairwise: pairwise differences between estimates in each row.
reference: differences between the estimates in each row and the estimate in the first row.
sequential: difference between an estimate and the estimate in the next row.
meandev: difference between an estimate and the mean of all estimates.
meanotherdev: difference between an estimate and the mean of all other estimates, excluding the current one.
poly: polynomial contrasts, as computed by the stats::contr.poly() function.
helmert: Helmert contrasts, as computed by the stats::contr.helmert() function. Contrast the 2nd level to the first, the 3rd to the average of the first two, and so on.
trt_vs_ctrl: difference between the mean of estimates (except the first) and the first estimate.
I(fun(x)): custom function to manipulate the vector of estimates x. The function fun() can return multiple (potentially named) estimates.
group (optional): Column name of newdata. Conduct hypothesis tests within subsets of the data.
Examples:
~ poly
~ sequential | groupid
~ reference
ratio ~ pairwise
difference ~ pairwise | groupid
~ I(x - mean(x)) | groupid
~ I(\(x) c(a = x[1], b = mean(x[2:3]))) | groupid
Matrix or Vector: Each column is a vector of weights. The output is the dot product between these vectors of weights and the vector of estimates. The matrix can have column names to label the estimates.
Function:
Accepts an argument x: object produced by a marginaleffects function, or a data frame with columns rowid and estimate.
Returns a data frame with columns term and estimate (mandatory) and rowid (optional).
The function can also accept optional input arguments: newdata, by, draws.
This function approach will not work for Bayesian models or with bootstrapping. In those cases, it is easy to use get_draws() to extract and manipulate the draws directly.
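A minimal sketch of this interface, assuming the marginaleffects package is attached; the function fun() and the "range" contrast it computes are purely illustrative:

```r
library(marginaleffects)
mod <- lm(mpg ~ hp + factor(cyl), data = mtcars)

# Sketch: `x` arrives as a data frame with an `estimate` column; the function
# must return a data frame with `term` and `estimate` columns. The contrast
# computed here (range of the group means) is hypothetical.
fun <- function(x) {
  data.frame(term = "range", estimate = max(x$estimate) - min(x$estimate))
}
avg_predictions(mod, by = "cyl", hypothesis = fun)
```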
See the Examples section below and the vignette: https://marginaleffects.com/chapters/hypothesis.html
Warning: When calling predictions() with type = "invlink(link)" (the default in some models), hypothesis is tested and p values are computed on the link scale.
- vcov
Type of uncertainty estimates to report (e.g., for robust standard errors). Acceptable values:
FALSE: Do not compute standard errors. This can speed up computation considerably.
TRUE: Unit-level standard errors using the default vcov(model) variance-covariance matrix.
String which indicates the kind of uncertainty estimates to return:
Heteroskedasticity-consistent: "HC", "HC0", "HC1", "HC2", "HC3", "HC4", "HC4m", "HC5". See ?sandwich::vcovHC
Heteroskedasticity and autocorrelation consistent: "HAC"
Mixed-models degrees of freedom: "satterthwaite", "kenward-roger"
Other: "NeweyWest", "KernHAC", "OPG". See the sandwich package documentation.
"rsample", "boot", "fwb", and "simulation" are passed to the method argument of the inferences() function. To customize the bootstrap or simulation process, call inferences() directly.
One-sided formula which indicates the name of cluster variables (e.g., ~unit_id). This formula is passed to the cluster argument of the sandwich::vcovCL function.
Square covariance matrix.
Function which returns a covariance matrix (e.g., stats::vcov(model)).
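For instance, a one-sided formula requests clustered standard errors via sandwich::vcovCL. This sketch assumes the sandwich package is installed and uses cyl as a hypothetical cluster variable:

```r
library(marginaleffects)
mod <- lm(mpg ~ hp + wt, data = mtcars)

# Standard errors clustered by cyl, computed via sandwich::vcovCL
hypotheses(mod, hypothesis = "hp = wt", vcov = ~cyl)

# Equivalent: pass a covariance matrix directly
hypotheses(mod, hypothesis = "hp = wt",
           vcov = sandwich::vcovCL(mod, cluster = ~cyl))
```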
- conf_level
NULL or numeric value between 0 and 1. Confidence level to use to build a confidence interval. When NULL and the model was generated by marginaleffects, the confidence level is taken from the conf_level attribute of the model. Otherwise, the default value is 0.95.
- df
Degrees of freedom used to compute p values and confidence intervals.
A single numeric value between 1 and Inf, or a numeric vector with length equal to the number of rows in the output. When df is Inf, the normal distribution is used. When df is finite, the t distribution is used.
"residual": Calls insight::get_df() to extract degrees of freedom from the model automatically.
When using joint_test = "f", the df argument should be a numeric vector of length 2.
- equivalence
Numeric vector of length 2: bounds used for the two-one-sided test (TOST) of equivalence, and for the non-inferiority and non-superiority tests. For Bayesian models, this reports the proportion of posterior draws in the interval and the ROPE. See the Details section below.
- joint
Joint test of statistical significance. The null hypothesis value can be set using the hypothesis argument.
FALSE: Hypotheses are not tested jointly.
TRUE: All parameters are tested jointly.
String: A regular expression to match the parameters to be tested jointly, via grep(joint, perl = TRUE).
Character vector of parameter names to be tested. Characters refer to the names of the vector returned by marginaleffects::get_coef(object).
Integer vector of indices. Which parameter positions to test jointly.
Note: When using the joint argument, the hypothesis argument is limited to NULL or numeric values. Users can chain multiple hypotheses() calls for complex joint hypothesis tests.
- joint_test
A character string specifying the type of test, either "f" or "chisq". The null hypothesis is set by the hypothesis argument, with default null equal to 0 for all parameters.
- multcomp
Logical or string. If TRUE or a string, apply a multiple comparison adjustment to the p values and report family-wise confidence intervals. Valid strings: "holm", "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "single-step", "Shaffer", "Westfall", "free". When multcomp is TRUE, the "holm" method is used.
- numderiv
String or list of strings indicating the method to use for the numeric differentiation used to compute delta method standard errors.
"fdforward": finite difference method with forward differences (default)
"fdcenter": finite difference method with central differences
"richardson": Richardson extrapolation method
Extra arguments can be specified by passing a list to the numderiv argument, with the name of the method first and named arguments following, e.g., numderiv = list("fdcenter", eps = 1e-5). When an unknown argument is used, marginaleffects prints the list of valid arguments for each method.
- ...
Additional arguments are passed to the predict() method supplied by the modeling package. These arguments are particularly useful for mixed-effects or Bayesian models (see the online vignettes on the marginaleffects website). Available arguments can vary from model to model, depending on the range of arguments supported by each modeling package. See the "Model-Specific Arguments" section of the ?slopes documentation for a non-exhaustive list of available arguments.
Joint hypothesis tests
The test statistic for the joint Wald test is \((R\hat{\theta} - r)' (R \hat{V} R')^{-1} (R\hat{\theta} - r) / Q\), where \(\hat{\theta}\) is the vector of estimated parameters, \(\hat{V}\) is the estimated covariance matrix, \(R\) is a \(Q \times P\) matrix for testing \(Q\) hypotheses on \(P\) parameters, \(r\) is a \(Q \times 1\) vector of null hypothesis values, and \(Q\) is the number of rows in \(R\). If the test is a Chi-squared test, the test statistic is not normalized by \(Q\).
The p-value is then calculated based on either the F-distribution (for F-test) or the Chi-squared distribution (for Chi-squared test). For the F-test, the degrees of freedom are Q and (n - P), where n is the sample size and P is the number of parameters. For the Chi-squared test, the degrees of freedom are Q.
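As a sketch under the definitions above, the F statistic can be reproduced by hand in base R and compared against hypotheses(mod, joint = 2:3); the model and the pair of coefficients tested here are illustrative:

```r
# Sketch: manual joint Wald F test that coefficients 2 and 3 are both zero.
mod <- lm(mpg ~ hp + wt + factor(cyl), data = mtcars)
theta <- coef(mod)   # theta_hat: estimated parameters
V <- vcov(mod)       # V_hat: estimated covariance matrix

R <- matrix(0, nrow = 2, ncol = length(theta))
R[1, 2] <- 1         # hypothesis 1: hp = 0
R[2, 3] <- 1         # hypothesis 2: wt = 0
r <- c(0, 0)         # null hypothesis values
Q <- nrow(R)         # number of hypotheses

discrep <- R %*% theta - r
f_stat <- drop(t(discrep) %*% solve(R %*% V %*% t(R)) %*% discrep) / Q
p_val <- pf(f_stat, df1 = Q, df2 = nrow(mtcars) - length(theta),
            lower.tail = FALSE)
```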
Equivalence, Inferiority, Superiority
\(\theta\) is an estimate, \(\sigma_\theta\) its estimated standard error, and \([a, b]\) are the bounds of the interval supplied to the equivalence argument.
Non-inferiority:
\(H_0\): \(\theta \leq a\)
\(H_1\): \(\theta > a\)
\(t=(\theta - a)/\sigma_\theta\)
p: Upper-tail probability
Non-superiority:
\(H_0\): \(\theta \geq b\)
\(H_1\): \(\theta < b\)
\(t=(\theta - b)/\sigma_\theta\)
p: Lower-tail probability
Equivalence: Two One-Sided Tests (TOST)
p: Maximum of the non-inferiority and non-superiority p values.
Thanks to Russell V. Lenth for the excellent emmeans package and documentation which inspired this feature.
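As a sketch of the arithmetic above (not of the package internals), the three p values can be computed from a normal approximation; all numeric values below are hypothetical:

```r
# Sketch: non-inferiority, non-superiority, and TOST equivalence p values
# under a normal approximation. All inputs are hypothetical.
theta <- 0.5       # estimate
sigma <- 0.3       # standard error
a <- -1; b <- 1    # equivalence bounds

p_noninf <- pnorm((theta - a) / sigma, lower.tail = FALSE)  # H0: theta <= a
p_nonsup <- pnorm((theta - b) / sigma, lower.tail = TRUE)   # H0: theta >= b
p_equiv  <- max(p_noninf, p_nonsup)                         # TOST
```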
Examples
mod <- lm(mpg ~ hp + wt + factor(cyl), data = mtcars)
hypotheses(mod)
#>
#> Term Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> (Intercept) 35.8460 2.041 17.56 <0.001 227.0 31.8457 39.846319
#> hp -0.0231 0.012 -1.93 0.0531 4.2 -0.0465 0.000306
#> wt -3.1814 0.720 -4.42 <0.001 16.6 -4.5918 -1.771012
#> factor(cyl)6 -3.3590 1.402 -2.40 0.0166 5.9 -6.1062 -0.611803
#> factor(cyl)8 -3.1859 2.170 -1.47 0.1422 2.8 -7.4399 1.068169
#>
#>
# Test of equality between coefficients
hypotheses(mod, hypothesis = "hp = wt")
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> hp=wt 3.16 0.72 4.39 <0.001 16.4 1.75 4.57
#>
#>
# Non-linear function
hypotheses(mod, hypothesis = "exp(hp + wt) = 0.1")
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> exp(hp+wt)=0.1 -0.0594 0.0292 -2.04 0.0418 4.6 -0.117 -0.0022
#>
#>
# Robust standard errors
hypotheses(mod, hypothesis = "hp = wt", vcov = "HC3")
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> hp=wt 3.16 0.805 3.92 <0.001 13.5 1.58 4.74
#>
#>
# b1, b2, ... shortcuts can be used to identify the position of the
# parameters of interest in the output of marginaleffects functions
hypotheses(mod, hypothesis = "b2 = b3")
#> Warning:
#> It is essential to check the order of estimates when specifying hypothesis tests using positional indices like b1, b2, etc. The indices of estimates can change depending on the order of rows in the original dataset, user-supplied arguments, model-fitting package, and version of `marginaleffects`.
#>
#> It is also good practice to use assertions that ensure the order of estimates is consistent across different runs of the same code. Example:
#>
#> ```r
#> mod <- lm(mpg ~ am * carb, data = mtcars)
#>
#> # assertion for safety
#> p <- avg_predictions(mod, by = 'carb')
#> stopifnot(p$carb[1] == 1, p$carb[2] == 2)
#>
#> # hypothesis test
#> avg_predictions(mod, by = 'carb', hypothesis = 'b1 - b2 = 0')
#> ```
#>
#> Disable this warning with: `options(marginaleffects_safe = FALSE)`
#> This warning appears once per session.
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> b2=b3 3.16 0.72 4.39 <0.001 16.4 1.75 4.57
#>
#>
# wildcard
hypotheses(mod, hypothesis = "b* / b2 = 1")
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> b1/b2=1 -1551 764.0 -2.03 0.0423 4.6 -3048.9 -54
#> b2/b2=1 0 NA NA NA NA NA NA
#> b3/b2=1 137 78.1 1.75 0.0804 3.6 -16.6 290
#> b4/b2=1 144 111.0 1.30 0.1938 2.4 -73.3 362
#> b5/b2=1 137 151.9 0.90 0.3679 1.4 -161.0 435
#>
#>
# term names with special characters have to be enclosed in backticks
hypotheses(mod, hypothesis = "`factor(cyl)6` = `factor(cyl)8`")
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 %
#> `factor(cyl)6`=`factor(cyl)8` -0.173 1.65 -0.105 0.917 0.1 -3.41
#> 97.5 %
#> 3.07
#>
#>
mod2 <- lm(mpg ~ hp * drat, data = mtcars)
hypotheses(mod2, hypothesis = "`hp:drat` = drat")
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> `hp:drat`=drat -6.08 2.89 -2.1 0.0357 4.8 -11.8 -0.405
#>
#>
# predictions(), comparisons(), and slopes()
mod <- glm(am ~ hp + mpg, data = mtcars, family = binomial)
cmp <- comparisons(mod, newdata = "mean")
hypotheses(cmp, hypothesis = "b1 = b2")
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> b1=b2 -0.279 0.102 -2.72 0.00645 7.3 -0.48 -0.0783
#>
#>
mfx <- slopes(mod, newdata = "mean")
hypotheses(cmp, hypothesis = "b2 = 0.2")
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> b2=0.2 0.0929 0.108 0.858 0.391 1.4 -0.119 0.305
#>
#>
pre <- predictions(mod, newdata = datagrid(hp = 110, mpg = c(30, 35)))
hypotheses(pre, hypothesis = "b1 = b2")
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> b1=b2 -3.57e-05 2.84 -1.26e-05 1 0.0 -5.56 5.56
#>
#>
# The `hypothesis` argument can be used to compute standard errors for fitted values
mod <- glm(am ~ hp + mpg, data = mtcars, family = binomial)
f <- function(x) predict(x, type = "link", newdata = mtcars)
p <- hypotheses(mod, hypothesis = f)
head(p)
#>
#> Term Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> 1 -1.098 0.716 -1.534 0.125 3.0 -2.50 0.305
#> 2 -1.098 0.716 -1.534 0.125 3.0 -2.50 0.305
#> 3 0.233 0.781 0.299 0.765 0.4 -1.30 1.764
#> 4 -0.595 0.647 -0.919 0.358 1.5 -1.86 0.674
#> 5 -0.418 0.647 -0.645 0.519 0.9 -1.69 0.851
#> 6 -5.026 2.195 -2.290 0.022 5.5 -9.33 -0.725
#>
#>
f <- function(x) predict(x, type = "response", newdata = mtcars)
p <- hypotheses(mod, hypothesis = f)
head(p)
#>
#> Term Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> 1 0.25005 0.1343 1.862 0.06257 4.0 -0.0131 0.5132
#> 2 0.25005 0.1343 1.862 0.06257 4.0 -0.0131 0.5132
#> 3 0.55803 0.1926 2.898 0.00376 8.1 0.1806 0.9355
#> 4 0.35560 0.1483 2.398 0.01648 5.9 0.0650 0.6462
#> 5 0.39710 0.1550 2.562 0.01041 6.6 0.0933 0.7009
#> 6 0.00652 0.0142 0.459 0.64653 0.6 -0.0213 0.0344
#>
#>
# Complex aggregation
# Step 1: Collapse predicted probabilities by outcome level, for each individual
# Step 2: Take the mean of the collapsed probabilities by group and `cyl`
library(dplyr)
#>
#> Attaching package: ‘dplyr’
#> The following objects are masked from ‘package:stats’:
#>
#> filter, lag
#> The following objects are masked from ‘package:base’:
#>
#> intersect, setdiff, setequal, union
library(MASS)
#>
#> Attaching package: ‘MASS’
#> The following object is masked from ‘package:dplyr’:
#>
#> select
library(dplyr)
library(magrittr)
dat <- transform(mtcars, gear = factor(gear))
mod <- polr(gear ~ factor(cyl) + hp, dat)
aggregation_fun <- function(x) {
predictions(x, vcov = FALSE) %>%
mutate(group = ifelse(group %in% c("3", "4"), "3 & 4", "5")) %>%
summarize(estimate = sum(estimate), .by = c("rowid", "cyl", "group")) %>%
summarize(estimate = mean(estimate), .by = c("cyl", "group")) %>%
rename(term = cyl)
}
hypotheses(mod, hypothesis = aggregation_fun)
#>
#> Re-fitting to get Hessian
#>
#> Re-fitting to get Hessian
#> Warning: Unable to extract a variance-covariance matrix from this model.
#>
#> Term Group Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> 6 3 & 4 0.8390 NA NA NA NA NA NA
#> 4 3 & 4 0.7197 NA NA NA NA NA NA
#> 8 3 & 4 0.9283 NA NA NA NA NA NA
#> 6 5 0.1610 NA NA NA NA NA NA
#> 4 5 0.2803 NA NA NA NA NA NA
#> 8 5 0.0717 NA NA NA NA NA NA
#>
#>
# Equivalence, non-inferiority, and non-superiority tests
mod <- lm(mpg ~ hp + factor(gear), data = mtcars)
p <- predictions(mod, newdata = "median")
hypotheses(p, equivalence = c(17, 18))
#>
#> Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 % p (NonSup) p (Equiv)
#> 19.7 1 19.6 <0.001 281.3 17.7 21.6 0.951 0.951
#>
#> Term: b1
#>
mfx <- avg_slopes(mod, variables = "hp")
hypotheses(mfx, equivalence = c(-.1, .1))
#>
#> Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 % p (NonSup) p (Equiv)
#> -0.0669 0.011 -6.05 <0.001 29.4 -0.0885 -0.0452 <0.001 0.00135
#>
#> Term: hp
#>
cmp <- avg_comparisons(mod, variables = "gear", hypothesis = ~pairwise)
#> Warning: The `gear` variable is treated as a categorical (factor) variable, but the original data is of class numeric. It is safer and faster to convert such variables to factor before fitting the model and calling a `marginaleffects` function. This warning appears once per session.
hypotheses(cmp, equivalence = c(0, 10))
#>
#> Hypothesis Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> (5 - 3) - (4 - 3) 3.94 2.05 1.92 0.0543 4.2 -0.0727 7.95
#> p (NonSup) p (Equiv)
#> 0.00154 0.0271
#>
#> Term: b1
#>
# joint hypotheses: character vector
model <- lm(mpg ~ as.factor(cyl) * hp, data = mtcars)
hypotheses(model, joint = c("as.factor(cyl)6:hp", "as.factor(cyl)8:hp"))
#>
#>
#> Joint hypothesis test:
#> as.factor(cyl)6:hp = 0
#> as.factor(cyl)8:hp = 0
#>
#> F Pr(>|F|) Df 1 Df 2
#> 2.11 0.142 2 26
#>
#>
# joint hypotheses: regular expression
hypotheses(model, joint = "cyl")
#>
#>
#> Joint hypothesis test:
#> as.factor(cyl)6 = 0
#> as.factor(cyl)8 = 0
#> as.factor(cyl)6:hp = 0
#> as.factor(cyl)8:hp = 0
#>
#> F Pr(>|F|) Df 1 Df 2
#> 5.7 0.00197 4 26
#>
#>
# joint hypotheses: integer indices
hypotheses(model, joint = 2:3)
#>
#>
#> Joint hypothesis test:
#> as.factor(cyl)6 = 0
#> as.factor(cyl)8 = 0
#>
#> F Pr(>|F|) Df 1 Df 2
#> 6.12 0.00665 2 26
#>
#>
# joint hypotheses: different null hypotheses
hypotheses(model, joint = 2:3, hypothesis = 1)
#>
#>
#> Joint hypothesis test:
#> as.factor(cyl)6 = 1
#> as.factor(cyl)8 = 1
#>
#> F Pr(>|F|) Df 1 Df 2
#> 6.84 0.00411 2 26
#>
#>
hypotheses(model, joint = 2:3, hypothesis = 1:2)
#>
#>
#> Joint hypothesis test:
#> as.factor(cyl)6 = 1
#> as.factor(cyl)8 = 2
#>
#> F Pr(>|F|) Df 1 Df 2
#> 7.47 0.00273 2 26
#>
#>
# joint hypotheses: marginaleffects object
cmp <- avg_comparisons(model)
hypotheses(cmp, joint = "cyl")
#>
#>
#> Joint hypothesis test:
#> cyl 6 - 4 = 0
#> cyl 8 - 4 = 0
#>
#> F Pr(>|F|) Df 1 Df 2
#> 1.6 0.221 2 26
#>
#> Type: response
#>
# Multiple comparison adjustment
# p values and family-wise confidence intervals
cmp <- avg_comparisons(model)
hypotheses(cmp, multcomp = "hochberg")
#>
#> Term Estimate Std. Error z Pr(>|z|) S 2.5 % 97.5 %
#> cyl 0.1169 3.5037 0.0334 0.9734 0.0 -7.8974 8.131105
#> cyl -3.4496 3.3371 -1.0337 0.6025 0.7 -11.0826 4.183433
#> hp -0.0467 0.0206 -2.2674 0.0701 3.8 -0.0937 0.000409
#>
#>