Estimate the four parameters of the Freund (1961) bivariate extension of the exponential distribution by maximum likelihood estimation.

Usage

freund61(la = "loglink",  lap = "loglink",  lb = "loglink",
         lbp = "loglink", ia = NULL, iap = NULL, ib = NULL,
         ibp = NULL, independent = FALSE, zero = NULL)

Arguments

la, lap, lb, lbp

Link functions applied to the (positive) parameters \(\alpha\), \(\alpha'\), \(\beta\) and \(\beta'\), respectively (the “p” stands for “prime”). See Links for more choices.

ia, iap, ib, ibp

Initial values for the four parameters, respectively. The default is to estimate them all internally.

independent

Logical. If TRUE then the parameters are constrained to satisfy \(\alpha=\alpha'\) and \(\beta=\beta'\), which implies that \(y_1\) and \(y_2\) are independent and each has an ordinary exponential distribution.

zero

A vector specifying which linear/additive predictors are modelled as intercepts only. The values can be from the set {1,2,3,4}. The default is none of them. See CommonVGAMffArguments for more information.

Details

This model represents one type of bivariate extension of the exponential distribution that is applicable to certain problems, in particular, to two-component systems which can continue to function after one of the components has failed. Examples include engine failures in two-engine planes, and paired organs such as people's eyes, ears and kidneys. Suppose \(y_1\) and \(y_2\) are random variables representing the lifetimes of two components \(A\) and \(B\) in a two-component system. The dependence between \(y_1\) and \(y_2\) is essentially such that the failure of the \(B\) component changes the parameter of the exponential life distribution of the \(A\) component from \(\alpha\) to \(\alpha'\), while the failure of the \(A\) component changes the parameter of the exponential life distribution of the \(B\) component from \(\beta\) to \(\beta'\).
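
This failure mechanism can be sketched directly as a simulator. The function `rfreund61()` below is a hypothetical helper (it is not part of VGAM), written under the mechanism described in this paragraph: the first failure time is Exponential(\(\alpha+\beta\)), component \(A\) fails first with probability \(\alpha/(\alpha+\beta)\), and the survivor's residual lifetime is exponential with the "primed" rate.

```r
# Hypothetical sketch (not a VGAM function): simulate (y1, y2) from the
# Freund failure mechanism described above.
rfreund61 <- function(n, alpha, alphap, beta, betap) {
  t1 <- rexp(n, rate = alpha + beta)           # time of the first failure
  Afirst <- runif(n) < alpha / (alpha + beta)  # does component A fail first?
  extra1 <- rexp(n, rate = alphap)             # A's residual life if B fails first
  extra2 <- rexp(n, rate = betap)              # B's residual life if A fails first
  y1 <- ifelse(Afirst, t1, t1 + extra1)        # lifetime of component A
  y2 <- ifelse(Afirst, t1 + extra2, t1)        # lifetime of component B
  cbind(y1 = y1, y2 = y2)
}
```

When `alpha == alphap` and `beta == betap` this reduces to a pair of independent ordinary exponentials.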

The joint probability density function is given by $$f(y_1,y_2) = \alpha \beta' \exp(-\beta' y_2 - (\alpha+\beta-\beta')y_1) $$ for \(0 < y_1 < y_2\), and $$f(y_1,y_2) = \beta \alpha' \exp(-\alpha' y_1 - (\alpha+\beta-\alpha')y_2) $$ for \(0 < y_2 < y_1\). Here, all four parameters are positive, as well as the responses \(y_1\) and \(y_2\). Under this model, the probability that component \(A\) is the first to fail is \(\alpha/(\alpha+\beta)\). The time to the first failure is distributed as an exponential distribution with rate \(\alpha+\beta\). Furthermore, the distribution of the time from first failure to failure of the other component is a mixture of Exponential(\(\alpha'\)) and Exponential(\(\beta'\)) with proportions \(\beta/(\alpha+\beta)\) and \(\alpha/(\alpha+\beta)\) respectively.
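
A direct transcription of the density may make the two regions clearer; `dfreund61()` is a hypothetical helper for illustration only, not a VGAM function.

```r
# Hypothetical sketch of the joint density given above; all four
# parameters and both responses are assumed positive.
dfreund61 <- function(y1, y2, alpha, alphap, beta, betap) {
  ifelse(y1 < y2,
         alpha * betap  * exp(-betap  * y2 - (alpha + beta - betap)  * y1),
         beta  * alphap * exp(-alphap * y1 - (alpha + beta - alphap) * y2))
}
```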

The marginal distributions are, in general, not exponential. By default, the linear/additive predictors are \(\eta_1=\log(\alpha)\), \(\eta_2=\log(\alpha')\), \(\eta_3=\log(\beta)\), \(\eta_4=\log(\beta')\).

A special case is when \(\alpha=\alpha'\) and \(\beta=\beta'\), which means that \(y_1\) and \(y_2\) are independent, and both have an ordinary exponential distribution with means \(1 / \alpha\) and \(1 / \beta\) respectively.

Fisher scoring is used, and the initial values correspond to the MLEs of an intercept model. Consequently, convergence may take only one iteration.

Value

An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.

References

Freund, J. E. (1961). A bivariate extension of the exponential distribution. Journal of the American Statistical Association, 56, 971–977.

Author

T. W. Yee

Note

To estimate all four parameters, the data must contain some observations with \(y_1<y_2\) and some with \(y_2<y_1\).

The response must be a two-column matrix, with columns \(y_1\) and \(y_2\). Currently, the fitted value is a matrix with two columns; the first column has values \((\alpha'+\beta)/(\alpha' (\alpha+\beta))\) for the mean of \(y_1\), while the second column has values \((\beta'+\alpha)/(\beta' (\alpha+\beta))\) for the mean of \(y_2\). The variance of \(y_1\) is $$ \frac{(\alpha')^2 + 2 \alpha \beta + \beta^2}{ (\alpha')^2 (\alpha + \beta)^2}, $$ the variance of \(y_2\) is $$ \frac{(\beta')^2 + 2 \alpha \beta + \alpha^2 }{ (\beta')^2 (\alpha + \beta)^2 }, $$ the covariance of \(y_1\) and \(y_2\) is $$ \frac{\alpha' \beta' - \alpha \beta }{ \alpha' \beta' (\alpha + \beta)^2}. $$
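
The fitted values and the moments quoted above can be computed directly from the four parameters; `freund61.moments()` is a hypothetical helper that simply transcribes the formulas.

```r
# Hypothetical sketch: moments implied by the formulas above.
freund61.moments <- function(alpha, alphap, beta, betap) {
  s <- alpha + beta
  list(mean.y1  = (alphap + beta)  / (alphap * s),
       mean.y2  = (betap  + alpha) / (betap  * s),
       var.y1   = (alphap^2 + 2 * alpha * beta + beta^2)  / (alphap^2 * s^2),
       var.y2   = (betap^2  + 2 * alpha * beta + alpha^2) / (betap^2  * s^2),
       cov.y1y2 = (alphap * betap - alpha * beta) / (alphap * betap * s^2))
}
```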

Examples

fdata <- data.frame(y1 = rexp(nn <- 1000, rate = exp(1)))
fdata <- transform(fdata, y2 = rexp(nn, rate = exp(2)))
fit1 <- vglm(cbind(y1, y2) ~ 1, freund61, data = fdata, trace = TRUE)
#> Iteration 1: loglikelihood = 994.44856
coef(fit1, matrix = TRUE)
#>             loglink(a) loglink(ap) loglink(b) loglink(bp)
#> (Intercept)  0.9594723    0.921811   2.079533    2.013841
Coef(fit1)
#>        a       ap        b       bp 
#> 2.610319 2.513839 8.000733 7.492039 
vcov(fit1)
#>               (Intercept):1 (Intercept):2 (Intercept):3 (Intercept):4
#> (Intercept):1   0.004065041    0.00000000    0.00000000   0.000000000
#> (Intercept):2   0.000000000    0.00132626    0.00000000   0.000000000
#> (Intercept):3   0.000000000    0.00000000    0.00132626   0.000000000
#> (Intercept):4   0.000000000    0.00000000    0.00000000   0.004065041
head(fitted(fit1))
#>         y1        y2
#> 1 0.394181 0.1270762
#> 2 0.394181 0.1270762
#> 3 0.394181 0.1270762
#> 4 0.394181 0.1270762
#> 5 0.394181 0.1270762
#> 6 0.394181 0.1270762
summary(fit1)
#> 
#> Call:
#> vglm(formula = cbind(y1, y2) ~ 1, family = freund61, data = fdata, 
#>     trace = TRUE)
#> 
#> Coefficients: 
#>               Estimate Std. Error z value Pr(>|z|)    
#> (Intercept):1  0.95947    0.06376   15.05   <2e-16 ***
#> (Intercept):2  0.92181    0.03642   25.31   <2e-16 ***
#> (Intercept):3  2.07953    0.03642   57.10   <2e-16 ***
#> (Intercept):4  2.01384    0.06376   31.59   <2e-16 ***
#> ---
#> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> 
#> Names of linear predictors: loglink(a), loglink(ap), loglink(b), loglink(bp)
#> 
#> Log-likelihood: 994.4486 on 3996 degrees of freedom
#> 
#> Number of Fisher scoring iterations: 1 
#> 

# y1 and y2 are independent, so fit an independence model
fit2 <- vglm(cbind(y1, y2) ~ 1, freund61(indep = TRUE),
             data = fdata, trace = TRUE)
#> Iteration 1: loglikelihood = 993.91311
#> Iteration 2: loglikelihood = 993.9132
#> Iteration 3: loglikelihood = 993.9132
coef(fit2, matrix = TRUE)
#>             loglink(a) loglink(ap) loglink(b) loglink(bp)
#> (Intercept)   0.930945    0.930945   2.062968    2.062968
constraints(fit2)
#> $`(Intercept)`
#>      [,1] [,2]
#> [1,]    1    0
#> [2,]    1    0
#> [3,]    0    1
#> [4,]    0    1
#> 
pchisq(2 * (logLik(fit1) - logLik(fit2)),  # p-value
       df = df.residual(fit2) - df.residual(fit1),
       lower.tail = FALSE)
#> [1] 0.5854582
lrtest(fit1, fit2)  # Better alternative
#> Likelihood ratio test
#> 
#> Model 1: cbind(y1, y2) ~ 1
#> Model 2: cbind(y1, y2) ~ 1
#>    #Df LogLik Df  Chisq Pr(>Chisq)
#> 1 3996 994.45                     
#> 2 3998 993.91  2 1.0707     0.5855