SURff: Seemingly Unrelated Regressions Family Function
Fits a system of seemingly unrelated regressions.
Usage
SURff(mle.normal = FALSE,
      divisor = c("n", "n-max(pj,pk)", "sqrt((n-pj)*(n-pk))"),
      parallel = FALSE, Varcov = NULL, matrix.arg = FALSE)
Arguments
- mle.normal
Logical. If TRUE then the MLE, assuming multivariate normal errors, is computed; the effect is just to add a loglikelihood slot to the returned object. The result is then the maximum likelihood estimator.
- divisor
Character, partial matching allowed and the first choice is the default. The divisor for the estimate of the covariances. If "n" then the estimate will be biased. If one of the others then the estimate will be unbiased for some elements. If mle.normal = TRUE and this argument is not "n" then a warning or an error will result.
- parallel
See CommonVGAMffArguments. If parallel = TRUE then the constraint also applies to the intercept.
- Varcov
Numeric. This may be assigned a variance-covariance matrix of the errors. If matrix.arg then this is an \(M \times M\) matrix. If !matrix.arg then this is an \(M \times M\) matrix in matrix-band format (a vector with at least \(M\) and at most M*(M+1)/2 elements).
- matrix.arg
Logical, of length one.
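To make the two Varcov formats concrete, here is a small sketch for M = 2. The ordering of the matrix-band vector shown below (diagonal entries first, then the off-diagonal band) is an assumption for illustration; consult the package documentation for the exact ordering it expects.

```r
# Hypothetical illustration of supplying a 2 x 2 error covariance matrix.
Sigma <- matrix(c(4, 1,
                  1, 9), nrow = 2)  # full matrix, usable with matrix.arg = TRUE
# A matrix-band style vector with M*(M+1)/2 = 3 elements; the ordering here
# (diagonal first, then the off-diagonal band) is assumed, not authoritative.
band <- c(diag(Sigma), Sigma[upper.tri(Sigma)])
band  # prints 4 9 1
```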
Details
Proposed by Zellner (1962), the basic seemingly unrelated regressions (SUR) model is a set of LMs (\(M > 1\) of them) tied together at the error term level. Each LM's model matrix may potentially have its own set of predictor variables.
Zellner's efficient (ZEF) estimator (also known as
Zellner's two-stage Aitken estimator)
can be obtained by setting
maxit = 1
(and possibly divisor = "sqrt" or
divisor = "n-max").
The default value of maxit (in vglm.control)
probably means that the iterative GLS (IGLS) estimator is computed,
because IRLS will probably iterate to convergence.
IGLS means that, at each iteration, the residuals are used to estimate
the error variance-covariance matrix, and then that matrix is used
in the GLS fit.
The IGLS estimator is also known
as Zellner's iterative Aitken estimator, or IZEF.
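The IGLS iteration described above can be sketched in plain R on simulated data. This is an illustration of the algorithm only, not the internals of SURff; the data, coefficients, and iteration count are all made up.

```r
# Minimal IGLS sketch for a 2-equation SUR system with simulated data.
set.seed(1)
n <- 50
x1 <- rnorm(n); x2 <- rnorm(n)
y1 <-  1 + 2 * x1 + rnorm(n)            # equation 1
y2 <- -1 + 3 * x2 + rnorm(n)            # equation 2
X <- rbind(cbind(1, x1, 0, 0),          # stacked block-diagonal design
           cbind(0, 0, 1, x2))
y <- c(y1, y2)
beta <- qr.solve(X, y)                  # start from OLS (Sigma = identity)
for (iter in 1:5) {
  E <- matrix(y - X %*% beta, n, 2)     # residuals, one column per equation
  Sigma <- crossprod(E) / n             # divisor "n" (biased) estimate
  W <- solve(Sigma) %x% diag(n)         # GLS weights: Var(y) = Sigma %x% I_n
  beta <- solve(t(X) %*% W %*% X, t(X) %*% W %*% y)  # GLS step
}
round(drop(beta), 3)                    # close to c(1, 2, -1, 3)
```

A single pass (one estimate of Sigma followed by one GLS step) corresponds to Zellner's two-stage (ZEF) estimator; iterating until the coefficients stop changing gives IZEF.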
Value
An object of class "vglmff" (see vglmff-class).
The object is used by modelling functions such as
vglm and vgam.
References
Zellner, A. (1962). An Efficient Method of Estimating Seemingly Unrelated Regressions and Tests for Aggregation Bias. J. Amer. Statist. Assoc., 57(298), 348–368.
Kmenta, J. and Gilbert, R. F. (1968). Small Sample Properties of Alternative Estimators of Seemingly Unrelated Regressions. J. Amer. Statist. Assoc., 63(324), 1180–1200.
Warning
The default convergence criterion may be a little loose.
Try setting epsilon = 1e-11, especially
with mle.normal = TRUE.
Note
The fitted object has slot @extra$ncols.X.lm which is
an \(M\)-vector with the number of parameters for each LM.
Also, @misc$values.divisor is the \(M\)-vector of
divisor values.
Constraint matrices are needed in order to specify which response
variables each term on the RHS of the formula is a
regressor for.
See the constraints argument of vglm
for more information.
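As a small illustration of what a constraint matrix does (the numbers here are made up; real usage appears in the Examples): with M = 2 responses, a term's contribution to the two linear predictors is C %*% beta, where C is that term's constraint matrix.

```r
# For M = 2 responses, a term's contribution to (eta1, eta2) is C %*% beta.
C_both  <- diag(2)       # a separate coefficient for each response
C_first <- rbind(1, 0)   # the term enters the first response only
beta1 <- 0.5                          # made-up coefficient, restricted term
drop(C_first %*% beta1)               # contribution: 0.5 0.0
beta2 <- c(0.3, 0.7)                  # made-up coefficients, free term
drop(C_both %*% beta2)                # contribution: 0.3 0.7
```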
Examples
# Obtain some of the results of p.1199 of Kmenta and Gilbert (1968)
clist <- list("(Intercept)" = diag(2),
"capital.g" = rbind(1, 0),
"value.g" = rbind(1, 0),
"capital.w" = rbind(0, 1),
"value.w" = rbind(0, 1))
zef1 <- vglm(cbind(invest.g, invest.w) ~
capital.g + value.g + capital.w + value.w,
SURff(divisor = "sqrt"), maxit = 1,
data = gew, trace = TRUE, constraints = clist)
#> Iteration 1: coefficients =
#> -27.719317124, -1.251988228, 0.139036274, 0.038310207, 0.063978067,
#> 0.057629796
round(coef(zef1, matrix = TRUE), digits = 4) # ZEF
#> invest.g invest.w
#> (Intercept) -27.7193 -1.2520
#> capital.g 0.1390 0.0000
#> value.g 0.0383 0.0000
#> capital.w 0.0000 0.0640
#> value.w 0.0000 0.0576
zef1@extra$ncols.X.lm
#> [1] 3 3
zef1@misc$divisor
#> [1] "sqrt((n-pj)*(n-pk))"
zef1@misc$values.divisor
#> [1] 17 17 17
round(sqrt(diag(vcov(zef1))), digits = 4) # SEs
#> (Intercept):1 (Intercept):2 capital.g value.g capital.w
#> 29.3212 7.5452 0.0250 0.0144 0.0530
#> value.w
#> 0.0145
nobs(zef1, type = "lm")
#> [1] 20
df.residual(zef1, type = "lm")
#> invest.g invest.w
#> 17 17
mle1 <- vglm(cbind(invest.g, invest.w) ~
capital.g + value.g + capital.w + value.w,
SURff(mle.normal = TRUE),
epsilon = 1e-11,
data = gew, trace = TRUE, constraints = clist)
#> Iteration 1: loglikelihood = -158.398407229
#> Iteration 2: loglikelihood = -158.306427301
#> Iteration 3: loglikelihood = -158.303257158
#> Iteration 4: loglikelihood = -158.303113124
#> Iteration 5: loglikelihood = -158.303106338
#> Iteration 6: loglikelihood = -158.303106016
#> Iteration 7: loglikelihood = -158.303106
#> Iteration 8: loglikelihood = -158.303106
#> Iteration 9: loglikelihood = -158.303106
round(coef(mle1, matrix = TRUE), digits = 4) # MLE
#> invest.g invest.w
#> (Intercept) -30.7484 -1.7016
#> capital.g 0.1359 0.0000
#> value.g 0.0405 0.0000
#> capital.w 0.0000 0.0557
#> value.w 0.0000 0.0594
round(sqrt(diag(vcov(mle1))), digits = 4) # SEs
#> (Intercept):1 (Intercept):2 capital.g value.g capital.w
#> 27.3459 6.9284 0.0235 0.0134 0.0488
#> value.w
#> 0.0133