# 1. Example 1: Classification for endometrial cancer

Endometrial <- read.table("Endometrial.dat", header = T)
Endometrial
• HG: histology grade (0: low / 1: high)
• NV: neovascularization (0/1)
• PI: pulsatility index
• EH: endometrium height

## 1.1 Quasi-complete separation

table(Endometrial$NV, Endometrial$HG)
##
##      0  1
##   0 49 17
##   1  0 13

If $$NV_i = 1$$, then $$y_i = 1$$; if $$NV_i = 0$$, then $$y_i$$ can be either $$0$$ or $$1$$. This pattern is called quasi-complete separation.

We can see that R returns a very large parameter estimate for NV, with an even larger standard error:

fit <- glm(HG ~ NV + PI + EH, family = binomial, data = Endometrial)
summary(fit)
##
## Call:
## glm(formula = HG ~ NV + PI + EH, family = binomial, data = Endometrial)
##
## Deviance Residuals:
##      Min        1Q    Median        3Q       Max
## -1.50137  -0.64108  -0.29432   0.00016   2.72777
##
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)
## (Intercept)    4.30452    1.63730   2.629 0.008563 **
## NV            18.18556 1715.75089   0.011 0.991543
## PI            -0.04218    0.04433  -0.952 0.341333
## EH            -2.90261    0.84555  -3.433 0.000597 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 104.903  on 78  degrees of freedom
## Residual deviance:  55.393  on 75  degrees of freedom
## AIC: 63.393
##
## Number of Fisher Scoring iterations: 17
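The huge NV estimate and standard error are the signature of quasi-complete separation: the likelihood keeps increasing as the NV coefficient grows toward infinity, so Fisher scoring only stops at the iteration cap. A minimal sketch with simulated data (not the Endometrial dataset) reproduces this behavior:

```r
# Simulated quasi-complete separation: whenever x = 1, y = 1,
# so the MLE of the x coefficient diverges to +infinity.
set.seed(1)
x <- c(rep(0, 60), rep(1, 15))
y <- c(rbinom(60, 1, 0.3), rep(1, 15))
fit.sep <- glm(y ~ x, family = binomial)  # warns: fitted probabilities 0 or 1
coef(fit.sep)["x"]                        # very large estimate
summary(fit.sep)$coefficients["x", "Std. Error"]  # enormous standard error
```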

You can still run a likelihood-ratio test with df = 1 for $$H_0$$: $$\beta_1 = 0$$, where $$\beta_1$$ is the coefficient of NV:

deviance(glm(HG ~ PI + EH, family = binomial, data = Endometrial)) - deviance(fit)
## [1] 9.357643

Comparing with a chi-square distribution with df = 1, the p-value is 0.002. We can conclude that $$\beta_1 > 0$$. We will present other inference methods in later sections.
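The p-value quoted above can be checked directly from the chi-square survival function:

```r
# LRT statistic from the deviance difference above, df = 1
lrt <- 9.357643
pchisq(lrt, df = 1, lower.tail = FALSE)  # about 0.0022
```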

We can also run an analysis of deviance:

fit1 <- glm(HG ~ PI + EH + NV, family = "binomial", data = Endometrial)
anova(fit1, test = "LRT")

## 1.2 ROC curve

Here we used all samples to train the logistic regression; we now examine the training performance with an ROC curve.

Endometrial$pred <- predict(fit, Endometrial, type = "response")
Endometrial
library(ROCR)
## Loading required package: gplots
##
## Attaching package: 'gplots'
## The following object is masked from 'package:stats':
##
##     lowess
pred <- prediction(Endometrial$pred, Endometrial$HG)
perf <- performance(pred, "sens", "fpr")
plot(perf)
## Compare with a simpler model
fit2 <- glm(HG ~ PI + EH, family = "binomial", data = Endometrial)
Endometrial$pred2 <- predict(fit2, Endometrial, type = "response")
pred2 <- prediction(Endometrial$pred2, Endometrial$HG)
perf2 <- performance(pred2,"sens","fpr")
plot(perf2, add = T, col = "red")

We see a slightly larger training error for the simpler model.

For an unbiased evaluation of the classification performance, we should look at the ROC curve on a test dataset. Here we randomly select $$20\%$$ of the samples as test data.

n <- nrow(Endometrial)
set.seed(1)
test.idx <- sample(n, round(0.2*n))
fit.train <- glm(HG ~ PI + EH + NV, family = "binomial", data = Endometrial[-test.idx,])
test <- Endometrial[test.idx, ]
test$pred <- predict(fit.train, test, type = "response")
pred <- prediction(test$pred, test$HG)
perf <- performance(pred, "sens", "fpr")
plot(perf)
## Compare with a simpler model
fit.train2 <- glm(HG ~ PI + EH, family = "binomial", data = Endometrial[-test.idx,])
test$pred2 <- predict(fit.train2, test, type = "response")
pred2 <- prediction(test$pred2, test$HG)
perf2 <- performance(pred2,"sens","fpr")
plot(perf2, add = T, col = "red")

We see that the simpler model is better in terms of classification on the test data. However, the sample size in this dataset is small, so the randomness in the test-data ROC curve is non-negligible.
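A scalar summary such as the AUC can complement the visual comparison; in ROCR it is available as `performance(pred, "auc")`. As a package-free sketch (simulated scores, not the Endometrial fits), the AUC equals the probability that a random positive case scores higher than a random negative case:

```r
# AUC via the Wilcoxon/Mann-Whitney identity on simulated scores
set.seed(2)
y <- rbinom(200, 1, 0.5)
score <- y + rnorm(200)   # informative classifier score
auc <- mean(outer(score[y == 1], score[y == 0], ">"))
auc                        # strictly between 0.5 and 1 for an informative score
```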

# 2. Example 2: Dose-response study

The data report the deaths of adult flour beetles after exposure to gaseous carbon disulfide at various dosages. The data are in group-level form.

beetles2 <- read.table("beetles2.dat", header = T)
beetles2

## 2.1 Group-level data vs. ungrouped data

alive <- beetles2$n - beetles2$dead
data <- matrix(append(beetles2$dead, alive), ncol = 2)
logdose <- beetles2$logdose
dead <- beetles2$dead
n <- beetles2$n
fit.probit <- glm(data ~ logdose, family = binomial(link = probit))
summary(fit.probit)
##
## Call:
## glm(formula = data ~ logdose, family = binomial(link = probit))
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -1.5627  -0.4848   0.7647   1.0530   1.3149
##
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -34.956      2.649  -13.20   <2e-16 ***
## logdose       19.741      1.488   13.27   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 284.202  on 7  degrees of freedom
## Residual deviance:   9.987  on 6  degrees of freedom
## AIC: 40.185
##
## Number of Fisher Scoring iterations: 4

The residual deviance is $$9.99$$ on $$6$$ degrees of freedom (p-value $$0.125$$ from the likelihood-ratio test against the group-level saturated model), so there is no evidence of lack of fit.
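Two quick follow-ups from the printed summary: verifying the quoted p-value, and the classic dose-response summary LD50, the log-dose at which half the beetles are expected to die. For the probit model, $$\Phi(\beta_0 + \beta_1 x) = 0.5$$ gives $$x = -\beta_0/\beta_1$$:

```r
# p-value of the residual deviance against the group-level saturated model
pchisq(9.987, df = 6, lower.tail = FALSE)   # about 0.125

# LD50 on the log-dose scale, using the coefficients printed above
beta <- c(-34.956, 19.741)
ld50 <- -beta[1] / beta[2]
ld50                                        # about 1.77
```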

Now let’s check the ungrouped data

Beetles <- read.table("Beetles.dat", header = T)
Beetles
fit.probit2 <- glm(y ~ x, family = binomial(link = probit), data = Beetles)
summary(fit.probit2)
##
## Call:
## glm(formula = y ~ x, family = binomial(link = probit), data = Beetles)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -2.5638  -0.6263   0.1597   0.4478   2.3883
##
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -34.956      2.649  -13.20   <2e-16 ***
## x             19.741      1.488   13.27   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 645.44  on 480  degrees of freedom
## Residual deviance: 371.23  on 479  degrees of freedom
## AIC: 375.23
##
## Number of Fisher Scoring iterations: 6

The residual deviance is $$371.23$$. The likelihood-ratio test for the residual deviance is invalid here: with ungrouped binary data, the saturated model has one parameter per observation, so the deviance does not converge to a chi-square distribution as the sample size grows.
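The grouped and ungrouped representations nevertheless give identical coefficient estimates, and identical deviance differences between nested models; only the residual deviances disagree, because they are measured against different saturated models. A self-contained sketch with simulated dose-response data (not the beetles files):

```r
# Grouped vs. ungrouped binomial fits on the same simulated data
set.seed(3)
x <- rep(c(1.0, 1.5, 2.0, 2.5), each = 50)
y <- rbinom(length(x), 1, pnorm(-3 + 2 * x))

fit.u <- glm(y ~ x, family = binomial(link = probit))    # ungrouped
dead <- tapply(y, x, sum)
n.g  <- tapply(y, x, length)
xg   <- as.numeric(names(dead))
fit.g <- glm(cbind(dead, n.g - dead) ~ xg,
             family = binomial(link = probit))           # grouped

all.equal(unname(coef(fit.u)), unname(coef(fit.g)), tolerance = 1e-4)
deviance(fit.u)  # large: against the one-parameter-per-observation saturated model
deviance(fit.g)  # small: against the group-level saturated model
# Deviance *differences* between nested models agree:
(fit.u$null.deviance - deviance(fit.u)) - (fit.g$null.deviance - deviance(fit.g))
```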