Solutions Manual to
NONPARAMETRIC METHODS IN STATISTICS: SAS APPLICATIONS

Olga Korosteleva
Department of Mathematics and Statistics
California State University, Long Beach
2013

Chapter 1

Exercise 1.1 The sign of the difference between the two-week measurement and the starting one for each participant is recorded in the last column of the following table.

Start     Two Weeks   Sign of Difference
22.125    23.375      +
23.500    23.125      -
23.500    24.750      +
25.875    25.750      -
26.375    26.750      +
21.375    22.625      +
28.875    29.000      +
24.625    24.000      -
23.125    24.125      +
25.000    27.250      +

The test statistic is the number of plus signs, M = 7. The total number of trials is n = 10. The alternative hypothesis is one-sided, H1 : p > 0.5, since more pluses than minuses are expected if the food supplement is effective. The P-value for the sign test is

P-value = (0.5)^10 * sum_{k=7}^{10} C(10,k) = (0.5)^10 [C(10,7) + C(10,8) + C(10,9) + C(10,10)]
        = (0.5)^10 (120 + 45 + 10 + 1) = 0.1719.

The P-value exceeds 0.05, with the implication that the null hypothesis should not be rejected. The conclusion is that there is no evidence in the data that the muscle-building food supplement is effective. In SAS, we run the code:

data muscle_building;
input start two_weeks;
diff=two_weeks-start;
cards;
22.125 23.375
23.500 23.125
23.500 24.750
25.875 25.750
26.375 26.750
21.375 22.625
28.875 29.000
24.625 24.000
23.125 24.125
25.000 27.250
;
proc univariate data=muscle_building;
var diff;
run;

The output for the sign test is

Test    -Statistic-    -----p Value-----
Sign    M      2       Pr >= |M|  0.3438

Indeed, the test statistic computed by SAS is M - n/2 = 7 - 10/2 = 7 - 5 = 2, and the given P-value is two-sided. The one-sided P-value is derived as 0.3438/2 = 0.1719.
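As a quick cross-check of the hand calculation (a Python sketch, not part of the original SAS solution), the binomial tail above can be computed directly from the data in the table:

```python
from math import comb

# Baseline and two-week measurements (Exercise 1.1 data)
start = [22.125, 23.500, 23.500, 25.875, 26.375, 21.375, 28.875, 24.625, 23.125, 25.000]
two_weeks = [23.375, 23.125, 24.750, 25.750, 26.750, 22.625, 29.000, 24.000, 24.125, 27.250]

m = sum(b > a for a, b in zip(start, two_weeks))  # number of plus signs
n = len(start)

# One-sided sign-test P-value: P(M >= m) under Binomial(n, 0.5)
p_value = sum(comb(n, k) for k in range(m, n + 1)) * 0.5 ** n
print(m, round(p_value, 4))  # 7 0.1719
```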

Exercise 1.2 The last column in the chart below contains the sign of the difference between the intervention and control group measurements.

Pair     Intervention   Control   Sign of
Number   Group          Group     Difference
1        36             17        +
2        22             15        +
3        10             -8        +
4        12             -11       +
5        28             14        +
6        12             20        -
7        23             24        -
8        6              6         undefined

There is one tie in this data set, hence only 7 signs are used for the analysis. Logically, if the intervention is effective, then there would be higher scores recorded for the intervention group. Therefore, we test the one-tailed alternative H1 : p > 0.5. The sign test statistic is M = 5, and the P-value is calculated as follows:

P-value = (0.5)^7 * sum_{k=5}^{7} C(7,k) = (0.5)^7 [C(7,5) + C(7,6) + C(7,7)] = (0.5)^7 (21 + 7 + 1) = 0.2266 > 0.05.

At the 5% significance level, the null hypothesis is not rejected, leading to the conclusion that the new therapy is not effective. We run the following lines of code to verify this conclusion in SAS:

data mental_health;
input intervention control;
diff=intervention-control;
cards;
36 17
22 15
10 -8
12 -11
28 14
12 20
23 24
6 6
;
proc univariate data=mental_health;
var diff;
run;

The relevant output is

Test    -Statistic-    -----p Value-----
Sign    M    1.5       Pr >= |M|  0.4531

This is in sync with our calculations because M - n/2 = 5 - 7/2 = 5 - 3.5 = 1.5, and the one-sided P-value = 0.4531/2 = 0.22655, or 0.2266.

Exercise 1.3 The signs for the differences between week 1 and week 2 stock prices are as follows:

Day of the Week   Stock Price   Stock Price   Sign of
                  Week 1        Week 2        Difference
Mon               405.65        403.02        +
Tue               400.51        399.49        +
Wed               408.25        396.10        +
Thu               401.34        403.59        -
Fri               409.09        405.68        +

The alternative is two-sided, H1 : p ≠ 0.5, because no assumption is made on the direction of the shift in the distribution location parameter. The number of plus signs is M = 4, the total number of trials is n = 5, hence M ≥ n/2. Therefore,

P-value = 2 (0.5)^5 * sum_{k=4}^{5} C(5,k) = (0.5)^4 [C(5,4) + C(5,5)] = (0.5)^4 (5 + 1) = 0.375.

Since P-value > 0.05, there is not enough evidence to conclude that there is a shift in the weekly distribution of stock prices. Running the following code in SAS produces the same result:

data stock_prices;
input week1 week2;
diff=week1-week2;
cards;
405.65 403.02
400.51 399.49
408.25 396.10
401.34 403.59
409.09 405.68
;
proc univariate data=stock_prices;
var diff;
run;

The output is

Test    -Statistic-    -----p Value-----
Sign    M    1.5       Pr >= |M|  0.3750

As it should be, M - n/2 = 4 - 5/2 = 4 - 2.5 = 1.5, and the two-sided P-value = 0.3750.
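The two-sided version doubles the tail that contains M. A short Python sketch (not part of the original SAS solution) reproduces the hand calculation:

```python
from math import comb

# Week-1 and week-2 stock prices (Exercise 1.3 data)
week1 = [405.65, 400.51, 408.25, 401.34, 409.09]
week2 = [403.02, 399.49, 396.10, 403.59, 405.68]

m = sum(a > b for a, b in zip(week1, week2))  # plus signs
n = len(week1)

# Two-sided P-value: double the binomial tail containing M (here M >= n/2)
tail = sum(comb(n, k) for k in range(m, n + 1)) * 0.5 ** n
p_value = min(1.0, 2 * tail)
print(m, p_value)  # 4 0.375
```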

Exercise 1.4 The differences between the two-week measurement and the starting one for each participant and their ranks are presented in the following table:

Start     Two Weeks   Difference   Rank
22.125    23.375      1.25         8
23.500    23.125      -0.375       3.5
23.500    24.750      1.25         8
25.875    25.750      -0.125       1.5
26.375    26.750      0.375        3.5
21.375    22.625      1.25         8
28.875    29.000      0.125        1.5
24.625    24.000      -0.625       5
23.125    24.125      1            6
25.000    27.250      2.25         10

Note that the two absolute differences of 0.125 are tied for ranks 1 and 2, thus each of them receives rank (1+2)/2 = 1.5; the two absolute differences of 0.375 are tied for ranks 3 and 4, receiving rank 3.5 each; and the three differences of 1.25 are tied for ranks 7, 8, and 9, receiving rank (7+8+9)/3 = 8 each. The test statistic is the sum of the ranks for negative differences, T- = 3.5 + 1.5 + 5 = 10. From Table 1, the critical value for n = 10 at one-tailed α = 0.05 is T0 = 10; thus, the alternative hypothesis H1 : θstart < θtwo weeks should be accepted (the null rejected). The conclusion is that the muscle-building food supplement is effective. This conclusion runs counter to the one obtained in Exercise 1.1. For α = 0.01, T0 = 5, thus the null is not rejected at the 1% level. To get the signed-rank statistic and the corresponding P-value in SAS, the same code as in Exercise 1.1 is run. The result is

Test           -Statistic-    -----p Value-----
Signed Rank    S    17.5      Pr >= |S|  0.0820

The given test statistic equals S = n(n+1)/4 - T- = 10(10+1)/4 - 10 = 27.5 - 10 = 17.5. The one-sided P-value is 0.0820/2 = 0.0410, which supports our conclusion.
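The exact signed-rank P-value with midranks can be reproduced by enumerating all 2^10 equally likely sign assignments; a Python sketch (not part of the original solution), using the ranks from the table above:

```python
from itertools import product

# Midranks of the absolute differences (Exercise 1.4 table)
ranks = [8, 3.5, 8, 1.5, 3.5, 8, 1.5, 5, 6, 10]
t_minus_obs = 3.5 + 1.5 + 5.0  # ranks of the negative differences

n = len(ranks)
mean = sum(ranks) / 2.0          # n(n+1)/4 = 27.5
s_obs = abs(mean - t_minus_obs)  # SAS statistic |S| = 17.5

# Under H0 each sign vector is equally likely: exact two-sided P-value
count = 0
for signs in product((0, 1), repeat=n):
    t_minus = sum(r for r, s in zip(ranks, signs) if s)
    if abs(mean - t_minus) >= s_obs:
        count += 1
p_two_sided = count / 2 ** n
print(round(p_two_sided, 4))  # 0.082
```

Halving gives the one-sided value 0.0410 reported above.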

Exercise 1.5 The differences between the intervention and control group measurements and their ranks are

Pair     Intervention   Control   Difference   Rank
Number   Group          Group
1        36             17        19           6
2        22             15        7            2
3        10             -8        18           5
4        12             -11       23           7
5        28             14        14           4
6        12             20        -8           3
7        23             24        -1           1
8        6              6         0            N/A

We are testing H0 : θinterv = θcontrol against H1 : θinterv > θcontrol. One difference of zero is discarded, leaving a valid sample size of n = 7. The test statistic for the one-sided alternative hypothesis is T- = 3 + 1 = 4. From Table 1, the critical value for the 0.05 significance level is T0 = 3. Since T- > T0, the null hypothesis should not be rejected. The conclusion is that the new therapy is not effective. Note that this conclusion coincides with the one drawn from the sign test in Exercise 1.2. To see whether SAS is supportive of our conclusion, we run the same code as in Exercise 1.2. The output is

Test           -Statistic-    -----p Value-----
Signed Rank    S    10        Pr >= |S|  0.1094

The given test statistic is computed as S = n(n+1)/4 - T- = 7(7+1)/4 - 4 = 14 - 4 = 10. The one-sided P-value = 0.1094/2 = 0.0547 > 0.05, which upholds our conclusion.

Exercise 1.6 The differences between the stock prices at week 1 and week 2 and their ranks are as follows:

Day of the Week   Stock Price   Stock Price   Difference   Rank
                  Week 1        Week 2
Mon               405.65        403.02        2.63         3
Tue               400.51        399.49        1.02         1
Wed               408.25        396.10        12.15        5
Thu               401.34        403.59        -2.25        2
Fri               409.09        405.68        3.41         4

We would like to test the two-tailed alternative H1 : θweek1 ≠ θweek2. The test statistic is T = min(T+, T-) = T- = 2. The critical value for a two-tailed test and n = 5 cannot be found in Table 1. It should be understood that, whatever the value of the test statistic, the corresponding P-value is always in excess of 0.05, hence H0 can never be rejected. Thus, a meaningful conclusion can actually be drawn without doing any calculations. To check that the SAS output is in line with the computed test statistic T = 2, we run the code identical to the one given in Exercise 1.3 and arrive at

Test           -Statistic-    -----p Value-----
Signed Rank    S    5.5       Pr >= |S|  0.1875

Indeed, the test statistic is S = n(n+1)/4 - T- = 5(5+1)/4 - 2 = 7.5 - 2 = 5.5, and the two-tailed P-value of 0.1875 supports our conclusion. Note that even though a meaningful conclusion may be drawn from this testing procedure, the sample size is too small, and thus, regardless of the observed data, the null hypothesis is impossible to reject. This means that the test has a power of 0% and should not be used in practice.

Exercise 1.7 The data and their ranks are

Tx group   5    4     6    4     3     4     4     3     5   5
Rank       8    4.5   10   4.5   1.5   4.5   4.5   1.5   8   8

Cx group   7    8      12   10     8      9    10
Rank       11   12.5   17   15.5   12.5   14   15.5

The tested hypotheses are H0 : θTx = θCx and H1 : θTx < θCx. The test statistic is the sum of the ranks for the observations in the control group, since it is the smaller group. The value of the test statistic is W = 98. Thus, the test statistic must be compared to the upper critical value found in Table 2, which for n1 = 7, n2 = 10, and α = 0.05 is equal to WU = 81. Hence, the null hypothesis must be rejected and the conclusion drawn that the minimally invasive arthroplasty shortens the length of hospital stay. To do the analysis in SAS, the following code was run:

data arthroplasty;
input group $ HLOS @@; /*HLOS=hospital length of stay*/
cards;
Tx 5 Cx 7
Tx 4 Cx 8
Tx 6 Cx 12
Tx 4 Cx 10
Tx 3 Cx 8
Tx 4 Cx 9
Tx 4 Cx 10
Tx 3
Tx 5
Tx 5
;
proc npar1way data=arthroplasty wilcoxon;
class group;
var HLOS;
exact;
run;

The output is

Wilcoxon Two-Sample Test
Statistic (S)          98.0000
Exact Test
One-Sided Pr >= S    5.142E-05

The P-value is very small, implying that the null hypothesis should be rejected, signifying efficiency of the minimally invasive surgical procedure.
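The exact rank-sum P-value can be reproduced by brute force: every placement of the 7 control labels among the 17 pooled observations is equally likely under H0. A Python sketch (not part of the original solution):

```python
from itertools import combinations

# Hospital length of stay (Exercise 1.7 data)
tx = [5, 4, 6, 4, 3, 4, 4, 3, 5, 5]
cx = [7, 8, 12, 10, 8, 9, 10]

pooled = tx + cx

def midrank(v):
    # Average rank over tied values in the pooled sample
    less = sum(x < v for x in pooled)
    equal = sum(x == v for x in pooled)
    return less + (equal + 1) / 2.0

ranks = [midrank(v) for v in pooled]
w_obs = sum(ranks[len(tx):])  # rank sum of the smaller (Cx) group

# Exact upper-tail P-value: enumerate all C(17, 7) label placements
n = len(pooled)
count = total = 0
for idx in combinations(range(n), len(cx)):
    total += 1
    if sum(ranks[i] for i in idx) >= w_obs:
        count += 1
print(w_obs, count, total)  # 98.0 1 19448
```

Only the one arrangement placing all seven Cx labels on the seven largest midranks reaches 98, so the P-value is 1/19448 = 5.142E-05, matching the SAS output.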


Exercise 1.8 The data are assigned the ranks as follows:

Adolescents   15     8   8   10    6   9   7   8
Ranks         12.5   4   4   7.5   1   6   2   4

Adults   13     17     10    12   13     17     15     17     17     19
Ranks    10.5   15.5   7.5   9    10.5   15.5   12.5   15.5   15.5   18

The alternative hypothesis is lower-tailed, H1 : θadolesc < θadults. The test statistic is equal to the sum of the ranks for adolescents (the smaller sample), W = 41. From Table 2, the lower critical value for n1 = 8, n2 = 10, and α = 0.01 is WL = 49. Since the test statistic is below the lower critical value, we reject the null and conclude that adults show a better knowledge regarding HIV. In SAS, we run the code:

data HIV_knowledge;
input group $ score @@;
cards;
Adolescent 15 Adult 13
Adolescent 8 Adult 17
Adolescent 8 Adult 10
Adolescent 10 Adult 12
Adolescent 6 Adult 13
Adolescent 9 Adult 17
Adolescent 7 Adult 15
Adolescent 8 Adult 17
Adult 17
Adult 19
;
proc npar1way data=HIV_knowledge wilcoxon;
class group;
var score;
exact;
run;

The output below contains the test statistic and the one-tailed P-value.

Wilcoxon Two-Sample Test
Statistic (S)    41.0000

Exact Test
One-Sided Pr <= S    0.0089

The P-value is less than 0.01, thus we reject the null and draw an identical conclusion to the one above.

Exercise 1.10 (a) The ranks for the Wilcoxon rank-sum test are as follows:

Working moms        1   4   14   7    11   1   8    10
Ranks               2   7   16   12   15   2   13   14

Stay-at-home moms   1   3     5    5    4   3     4   5
Ranks               2   4.5   10   10   7   4.5   7   10

The test statistic is the sum of the ranks in the first sample (working moms), W = 81. From Table 3, the critical values for a 0.05 level two-tailed alternative H1 : θwork ≠ θhome when n1 = n2 = 8 are WL = 49 and WU = 87. The test statistic falls in between these two values, indicating that there is no difference in the location parameters of the underlying distributions in the two groups. In SAS, we run the code:

data hours_exercising;
input moms $ hours @@;
cards;
work 1 home 1
work 4 home 3
work 14 home 5
work 7 home 5
work 11 home 4
work 1 home 3
work 8 home 4
work 10 home 5
;

proc npar1way data=hours_exercising wilcoxon;
class moms;
var hours;
exact;
run;

The relevant SAS output is

Wilcoxon Two-Sample Test
Statistic (S)    81.0000
Exact Test
Two-Sided Pr >= |S - Mean|    0.1826

The P-value is larger than 0.05, supporting our conclusion of no difference in location parameters.

(b) The ranks for the Ansari-Bradley test are:

Working moms        1   4   14   7   11   1   8   10
Ranks               2   7   1    5   2    2   4   3

Stay-at-home moms   1   3     5   5   4   3     4   5
Ranks               2   4.5   7   7   7   4.5   7   7
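The Ansari-Bradley scores above (rising toward the middle of the ordered pooled sample and averaged over ties) can be cross-checked with a short Python sketch, which is not part of the original SAS solution:

```python
# Ansari-Bradley scores with midranks for tied values (Exercise 1.10(b) data)
work = [1, 4, 14, 7, 11, 1, 8, 10]
home = [1, 3, 5, 5, 4, 3, 4, 5]

pooled = sorted(work + home)
n = len(pooled)
# Raw AB score of the i-th order statistic (1-indexed): min(i, n + 1 - i)
raw = [min(i, n + 1 - i) for i in range(1, n + 1)]

# Average the raw scores over tied values
score = {}
for v in set(pooled):
    positions = [i for i, x in enumerate(pooled) if x == v]
    score[v] = sum(raw[i] for i in positions) / len(positions)

c = sum(score[v] for v in work)  # AB statistic for the first sample
print(c)  # 26.0
```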

We test a two-sided alternative H1 : γ ≠ 1. The test statistic is the sum of the ranks in the first sample (working moms), C = 26. The critical values for a 0.05 level two-tailed test when n1 = n2 = 8 are CL = 27 and CU = 46. Since the observed test statistic is below the lower critical value, the null must be rejected and the conclusion drawn that the dispersion differs by group. In SAS, the following lines are added to the code in part (a):

proc npar1way data=hours_exercising ab;
class moms;
var hours;
exact;
run;

The output is

Ansari-Bradley Two-Sample Test
Statistic (S)    26.0000
Exact Test
Two-Sided Pr >= |S - Mean|    0.0329

Since the P-value is less than 0.05, we reject H0 at the 5% significance level.

(c) From part (b), the test statistic is C = 26. The alternative hypothesis is lower-tailed, H1 : γ < 1. In Table 3, the lower critical value corresponding to n1 = n2 = 8 and α = 0.05 is CL = 28. Therefore, at the 5% level of significance, we reject the null hypothesis and conclude that the dispersion of the number of hours spent exercising last week is larger for working moms than for stay-at-home moms. At the 1% significance level, however, we cannot draw the same conclusion, since for α = 0.01, CL = 25. In this case we fail to reject the null and conclude that the dispersions don't differ. To verify this result in SAS, we look at the one-sided P-value in the exact Ansari-Bradley output.

Exercise 1.11 We test the upper-tailed alternative H1 : γ > 1. The

ranks are assigned as follows:

Normalized UCLA Score     27   37   40   63    31   81   63    57   90   94
Ranks                     1    3    4    8.5   2    3    8.5   7    2    1

Constant Shoulder Score   56   78   60   55   67   68   64
Ranks                     6    4    8    5    6    5    7

The test statistic is C = 41. The upper critical value that corresponds to n1 = 7, n2 = 10, and α = 0.05 is CU = 43. Since C is below the critical value, the null hypothesis is not rejected. This refutes the surgeon’s claim that the Constant shoulder scale calculates scores with a better precision. In SAS, we first perform the Wilcoxon rank-sum test. The code is

data shoulder_score;
input scale $ score @@;
cards;
UCLA 27 Constant 56
UCLA 37 Constant 78
UCLA 40 Constant 60
UCLA 63 Constant 55
UCLA 31 Constant 67
UCLA 81 Constant 68
UCLA 63 Constant 64
UCLA 57
UCLA 90
UCLA 94
;
proc npar1way data=shoulder_score wilcoxon;
class scale;
var score;
exact;
run;

The output favors the null hypothesis since the two-sided P-value is large. The output is

Wilcoxon Two-Sample Test
Statistic (S)    69.0000
Exact Test
Two-Sided Pr >= |S - Mean|    0.5839

The use of the Ansari-Bradley test is now validated, and additional lines of code can be run:

proc npar1way data=shoulder_score ab;
class scale;
var score;
exact;
run;

The output is

Ansari-Bradley Two-Sample Test
Statistic (S)    41.0000
Exact Test
One-Sided Pr >= S    0.0765

The P-value in excess of 0.05 supports the conclusion drawn above.

Exercise 1.12 The efficacy of the sealant is tantamount to the distribution function for the treated patients being above that for the non-treated patients at least for some values (for example, small values are more likely to be observed in the treatment group). Thus, we will be testing H1 : FTx(t) > FCx(t) for some t. The first step in applying the Kolmogorov-Smirnov test is to put all the observations in increasing order and compute the values of the empirical distribution functions F^Tx and F^Cx at each point. The calculations are summarized in the table below. The last column contains the difference F^Tx - F^Cx. The maximum of the values in the last column is taken as the test statistic.

Observation   F^Tx         F^Cx         F^Tx - F^Cx
1.6           1/5 = 0.2    0            0.2
2.1           2/5 = 0.4    0            0.4
2.8           2/5 = 0.4    1/5 = 0.2    0.2
3.2           3/5 = 0.6    1/5 = 0.2    0.4
3.7           3/5 = 0.6    2/5 = 0.4    0.2
3.8           4/5 = 0.8    2/5 = 0.4    0.4
4.0           1            2/5 = 0.4    0.6
5.3           1            3/5 = 0.6    0.4
7.2           1            4/5 = 0.8    0.2
8.6           1            1            0

The test statistic D+ = max(F^Tx - F^Cx) = 0.6 is attained at the observation 4.0. From Table 4, the critical value for n1 = n2 = 5 and α = 0.05 is 3/5 = 0.6. The observed D-statistic is equal to the critical value, indicating that the data favor the null hypothesis. We conclude that the sealant is not effective at the 5% significance level. To support this conclusion, we run the SAS code

data adhesions;
input group $ score @@;
cards;
Tx 2.1 Cx 3.7
Tx 1.6 Cx 7.2
Tx 3.8 Cx 2.8
Tx 3.2 Cx 5.3
Tx 4.0 Cx 8.6
;
proc npar1way data=adhesions;
class group;
var score;
exact;
run;

The output pertaining to the upper-tailed Kolmogorov-Smirnov test is

Kolmogorov-Smirnov Two-Sample Test
D+ = max (F1 - F2)    0.6000
Exact Pr >= D+        0.1786

The P-value is above 0.05, supporting our conclusion.
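The empirical-CDF table can be verified with a few lines of Python (not part of the original solution); exact fractions avoid floating-point noise:

```python
from fractions import Fraction

# Adhesion scores (Exercise 1.12 data)
tx = [2.1, 1.6, 3.8, 3.2, 4.0]
cx = [3.7, 7.2, 2.8, 5.3, 8.6]

def ecdf(sample, t):
    # Empirical distribution function evaluated at t
    return Fraction(sum(x <= t for x in sample), len(sample))

# D+ is the largest value of F^_Tx - F^_Cx over the pooled observation points
points = sorted(tx + cx)
d_plus = max(ecdf(tx, t) - ecdf(cx, t) for t in points)
print(d_plus)  # 3/5
```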

Exercise 1.13 We will be testing H1 : FTx(t) < FCx(t) for some t, because, in the event that the chemotherapy is effective, the reduction in tumor diameter should be larger for the treatment group patients than for control patients. The empirical distribution functions and their difference are computed in the table provided below.

Observation   F^Tx   F^Cx   F^Cx - F^Tx
-3.4          0      1/9    1/9
-1.6          0      2/9    2/9
-1.1          0      3/9    3/9
-0.9          0      4/9    4/9
0.3           0      5/9    5/9
1.1           1/7    5/9    26/63
1.5           1/7    6/9    33/63
1.6           1/7    7/9    40/63
1.9           2/7    7/9    31/63
2.0           2/7    8/9    38/63
2.1           3/7    8/9    29/63
2.3           3/7    1      4/7
2.4           4/7    1      3/7
2.5           5/7    1      2/7
3.4           6/7    1      1/7
4.2           1      1      0

The Kolmogorov-Smirnov lower-tailed test statistic is D- = max(F^Cx - F^Tx) = 40/63 = 0.6349. The critical value from Table 4 for sample sizes 7 and 9 at a 0.05 significance level is 5/9 = 0.5556. The test statistic is larger than the critical value, hence the null hypothesis is rejected and the efficacy of chemotherapy is supported at the 5% level. The critical value corresponding to the 1% significance level is 5/7 = 0.7143. The test statistic is smaller than this critical value, indicating that the null is sustained at the 1% level and no efficacy can be concluded. The analysis done in SAS leads to an identical conclusion. The code is

data tumors;
input group $ diameter @@;
cards;
Tx 2.5 Cx -0.9
Tx 2.4 Cx 1.5
Tx 2.1 Cx 2.3
Tx 3.4 Cx -1.6
Tx 4.2 Cx -3.4
Tx 1.1 Cx 0.3
Tx 1.9 Cx 2.0
Cx -1.1
Cx 1.6
;
proc npar1way data=tumors;
class group;
var diameter;
exact;
run;

The relevant output is

Kolmogorov-Smirnov Two-Sample Test
D- = max (F2 - F1)    0.6349
Exact Pr >= D-        0.0275

The P-value is between 0.01 and 0.05, which leads to the same conclusion as above.

Exercise 1.14 A two-sided Kolmogorov-Smirnov test with the alternative H1 : FUCLA(t) ≠ FConstant(t) for some t would be appropriate here. The test statistic is computed with the assistance of the following calculations:

Score   F^UCLA   F^Constant   |F^UCLA - F^Constant|
27      1/10     0            1/10
31      2/10     0            2/10
37      3/10     0            3/10
40      4/10     0            4/10
55      4/10     1/7          18/70
56      4/10     2/7          8/70
57      5/10     2/7          15/70
60      5/10     3/7          5/70
63      7/10     3/7          19/70
64      7/10     4/7          9/70
67      7/10     5/7          1/70
68      7/10     6/7          11/70
78      7/10     1            3/10
81      8/10     1            2/10
90      9/10     1            1/10
94      1        1            0

The test statistic D = max |F^UCLA - F^Constant| = 4/10. The critical value from Table 4, corresponding to a two-tailed test with sample sizes 7 and 10 and the significance level 0.05, is 43/70. Our test statistic is smaller than that; therefore, the null hypothesis is not rejected, and equality of distribution functions is concluded. To confirm this result, we run the SAS code:

data shoulder_score;
input scale $ score @@;
cards;
UCLA 27 Constant 56
UCLA 37 Constant 78
UCLA 40 Constant 60
UCLA 63 Constant 55
UCLA 31 Constant 67
UCLA 81 Constant 68
UCLA 63 Constant 64
UCLA 57
UCLA 90
UCLA 94
;
proc npar1way data=shoulder_score;
class scale;
var score;
exact;
run;

The test statistic and the exact P-value are

Kolmogorov-Smirnov Two-Sample Test
D = max |F1 - F2|    0.4000
Exact Pr >= D        0.4075

Since the P-value is larger than 0.05, the null should not be rejected.


Chapter 2

Exercise 2.1 For each student, the ranks of the observations are:

Student      Month 1 (Rank)   Month 2 (Rank)   Month 3 (Rank)   Month 4 (Rank)
1            2 (1)            4 (3.5)          3 (2)            4 (3.5)
2            0 (1)            1 (2)            3 (3)            4 (4)
3            4 (1.5)          5 (3)            4 (1.5)          7 (4)
4            3 (2)            3 (2)            4 (4)            3 (2)
5            0 (1.5)          0 (1.5)          1 (3)            3 (4)
6            4 (2)            3 (1)            5 (3.5)          5 (3.5)
7            5 (3.5)          5 (3.5)          4 (2)            2 (1)
Rank Total     12.5             16.5             19.0             22.0

Some observations are tied; therefore, formula (2.2) will be used to calculate the Friedman rank test statistic. We have n = 7, k = 4, R1 = 12.5, R2 = 16.5, R3 = 19.0, R4 = 22.0, and the sum of squares of all ranks is sum_{i=1}^{7} sum_{j=1}^{4} r_ij^2 = 205.5. The test statistic is given as

Q = 7(4 - 1) [ (1/7)(12.5^2 + 16.5^2 + 19.0^2 + 22.0^2) - (7)(4)(4 + 1)^2/4 ] / [ 205.5 - (7)(4)(4 + 1)^2/4 ] = 4.7705.

Using Table 5 with k = 4, n = 7, and α = 0.05, we obtain the critical value of 7.8. Since the observed statistic doesn't exceed the critical value, we fail to reject the null hypothesis H0 : θmonth1 = θmonth2 = θmonth3 = θmonth4. The conclusion is that the location parameters for the distribution of the number of books read don't differ by month. In SAS, we type the code:

data books;
input student month response @@;
cards;
1 1 2 1 2 4 1 3 3 1 4 4
2 1 0 2 2 1 2 3 3 2 4 4
3 1 4 3 2 5 3 3 4 3 4 7
4 1 3 4 2 3 4 3 4 4 4 3
5 1 0 5 2 0 5 3 1 5 4 3
6 1 4 6 2 3 6 3 5 6 4 5
7 1 5 7 2 5 7 3 4 7 4 2
;
proc sort data=books;
by student;
run;
proc rank data=books out=ranked;
var response;
by student;
ranks rank;
run;
proc freq data=ranked;
table student*month*rank/noprint cmh;
run;

The test statistic and P-value produced by SAS are

Alternative Hypothesis      Value     Prob
Row Mean Scores Differ      4.7705    0.1894

The large P-value indicates that H0 should not be rejected. This is in sync with our previous conclusion.
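The tie-corrected Friedman statistic of formula (2.2) can be reproduced directly from the raw counts; a Python sketch, not part of the original SAS solution:

```python
# Friedman statistic with within-block midranks (Exercise 2.1 data):
# rows = students, columns = months
data = [
    [2, 4, 3, 4],
    [0, 1, 3, 4],
    [4, 5, 4, 7],
    [3, 3, 4, 3],
    [0, 0, 1, 3],
    [4, 3, 5, 5],
    [5, 5, 4, 2],
]
n, k = len(data), len(data[0])

def midranks(row):
    # Rank within a student's row, averaging over ties
    return [sum(x < v for x in row) + (sum(x == v for x in row) + 1) / 2 for v in row]

ranks = [midranks(row) for row in data]
col_totals = [sum(r[j] for r in ranks) for j in range(k)]  # 12.5, 16.5, 19.0, 22.0
sum_sq = sum(r_ij ** 2 for r in ranks for r_ij in r)       # 205.5
correction = n * k * (k + 1) ** 2 / 4
q = n * (k - 1) * (sum(t ** 2 for t in col_totals) / n - correction) / (sum_sq - correction)
print(round(q, 4))  # 4.7705
```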


Exercise 2.2 Within-customer ranks of the data are:

Customer     Letter (Rank)   Phone (Rank)   Text (Rank)
1            0.4 (3)         0.3 (2)        0.1 (1)
2            0.8 (3)         0.4 (2)        0.3 (1)
3            0.5 (3)         0.4 (2)        0.1 (1)
4            0.7 (3)         0.6 (2)        0.2 (1)
5            0.6 (3)         0.3 (2)        0.2 (1)
6            0.6 (3)         0.5 (2)        0.4 (1)
7            0.6 (3)         0.4 (2)        0.3 (1)
8            0.7 (3)         0.6 (2)        0.2 (1)
Rank Total       24              16             8

Because no ties are observed, the Friedman rank test statistic is computed according to (2.1). For k = 3, n = 8, R1 = 24, R2 = 16, and R3 = 8,

Q = 12(24^2 + 16^2 + 8^2)/[(8)(3)(3 + 1)] - (3)(8)(3 + 1) = 16.

From Table 5, the critical value corresponding to α = 0.01 is 9; thus, we reject the null hypothesis. The conclusion is that the three methods of customer contact differ. The SAS code below outputs the test statistic and the P-value.

data TVcustomers;
input customer $ contact $ probability @@;
cards;
1 letter 0.4 1 phone 0.3 1 text 0.1
2 letter 0.8 2 phone 0.4 2 text 0.3
3 letter 0.5 3 phone 0.4 3 text 0.1
4 letter 0.7 4 phone 0.6 4 text 0.2
5 letter 0.6 5 phone 0.3 5 text 0.2
6 letter 0.6 6 phone 0.5 6 text 0.4
7 letter 0.6 7 phone 0.4 7 text 0.3
8 letter 0.7 8 phone 0.6 8 text 0.2
;

proc sort data=TVcustomers;
by customer;
run;
proc rank data=TVcustomers out=ranked;
var probability;
by customer;
ranks rank;
run;
proc freq data=ranked;
table customer*contact*rank/noprint cmh;
run;

The output is

Alternative Hypothesis      Value      Prob
Row Mean Scores Differ      16.0000    0.0003

The small P-value supports our conclusion that the methods differ significantly. To investigate which methods differ, we conduct a pairwise comparison using the Wilcoxon signed-rank test. The pairwise differences in observations are shown in the table below.

Customer   Letter-Phone   Letter-Text   Phone-Text
1          0.1            0.3           0.2
2          0.4            0.5           0.1
3          0.1            0.4           0.3
4          0.1            0.5           0.4
5          0.3            0.4           0.1
6          0.1            0.2           0.1
7          0.2            0.3           0.1
8          0.1            0.5           0.4

To test whether differences between pairs of methods exist, for each column of differences we compute the test statistic T = min(T+, T-) = 0 (all differences are positive, so T- = 0 in every column). The critical value for n = 8, the two-sided alternative, and significance level α = 0.01 is 0. This means that the three methods are all statistically different at a 0.01 level of significance. Running SAS produces the same result. The code is


data contact;
input letter phone text;
diff_lp=letter-phone;
diff_lt=letter-text;
diff_pt=phone-text;
cards;
0.4 0.3 0.1
0.8 0.4 0.3
0.5 0.4 0.1
0.7 0.6 0.2
0.6 0.3 0.2
0.6 0.5 0.4
0.6 0.4 0.3
0.7 0.6 0.2
;
proc univariate data=contact;
var diff_lp diff_lt diff_pt;
run;

The output is as follows. All three test statistics are equal to S = n(n + 1)/4 - T- = 8(8 + 1)/4 - 0 = 18 - 0 = 18.

- Statistic S 18

- - - - p Value - - - Pr >= |S| 0.0078

The conclusion coincides with the one drawn by hand. The p-value is 0.0078 < 0.05, thus we reject each of the three null hypotheses and conclude that all three methods of customer contact differ.
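Because every difference is positive, |S| attains its maximum n(n+1)/4 = 18, and only the two all-same-sign vectors among the 2^8 equally likely ones do so. A Python enumeration (not part of the original solution), shown for the Letter-Phone column with midranks handling the tied |differences|:

```python
from itertools import product

# Letter-Phone differences (Exercise 2.2 table); the other columns behave the same
diffs = [0.1, 0.4, 0.1, 0.1, 0.3, 0.1, 0.2, 0.1]
n = len(diffs)

abs_d = [abs(d) for d in diffs]
ranks = [sum(x < v for x in abs_d) + (sum(x == v for x in abs_d) + 1) / 2 for v in abs_d]
mean = sum(ranks) / 2  # n(n+1)/4 = 18
s_obs = abs(mean - sum(r for r, d in zip(ranks, diffs) if d < 0))  # |S| = 18

count = sum(
    1
    for signs in product((0, 1), repeat=n)
    if abs(mean - sum(r for r, s in zip(ranks, signs) if s)) >= s_obs
)
p = count / 2 ** n
print(p)  # 0.0078125
```

This reproduces the SAS exact value Pr >= |S| = 0.0078.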

Exercise 2.3 The ranks are assigned as follows:

Fish Pond   Lead Contents (in ppb)                         Total
A           3      4      4      5      7    8
Rank        1      3      3      5.5    9    10            31.5
B           10     11     11     12     15   18
Rank        12.5   14.5   14.5   16     17   18            92.5
C           4      5      6      6      9    10
Rank        3      5.5    7.5    7.5    11   12.5          47

Since tied ranks are assigned, we will be using the definition given by (2.4) to compute the Kruskal-Wallis test statistic. We know that n1 = n2 = n3 = 6, N = n1 + n2 + n3 = 18, R1 = 31.5, R2 = 92.5, and R3 = 47. The number of tied 4's is T1 = 3, 5's is T2 = 2, 6's is T3 = 2, 10's is T4 = 2, and 11's is T5 = 2. The denominator in (2.4) is equal to

1 - [(3^3 - 3) + (2^3 - 2) + (2^3 - 2) + (2^3 - 2) + (2^3 - 2)]/(18^3 - 18) = 0.99174,

and, hence, the H-statistic is derived as

H = { 12/[(18)(18 + 1)] * [31.5^2/6 + 92.5^2/6 + 47^2/6] - 3(18 + 1) } / 0.99174 = 11.8552.

From Table 6, the critical value corresponding to n1 = n2 = n3 = 6 and α = 0.05 is 5.719. Thus, we reject the null hypothesis and conclude that the fish ponds differ in lead contents. We run the following code in SAS:

data lead_contents;
input pond $ lead @@;
cards;
A 3 A 4 A 4 A 5 A 7 A 8
B 10 B 11 B 11 B 12 B 15 B 18
C 4 C 5 C 6 C 6 C 9 C 10
;
proc npar1way data=lead_contents wilcoxon;
class pond;
var lead;
exact;
run;

The output is

Kruskal-Wallis Test
Chi-Square               11.8552
Exact Pr >= Chi-Square   2.379E-04

Since the P-value is very small, H0 is rejected. Next, to see which ponds differ in lead contents, we conduct pairwise two-tailed Wilcoxon rank-sum tests.

26

• To compare ponds A and C, we write

Fish Pond   Lead Contents (in ppb)               Total
A           3    4      4      5      7    8
Rank        1    3      3      5.5    9    10     31.5
C           4    5      6      6      9    10
Rank        3    5.5    7.5    7.5    11   12     46.5

The test statistic is W = 31.5, the sum of the ranks in the first sample, since the samples are of equal sizes. The lower-tailed critical value from Table 2 for n1 = n2 = 6 and α = 0.05 is WL = 26 and the upper-tailed one is WU = 52. Since WL < W < WU, the null hypothesis should not be rejected. We conclude at the 5% significance level that there is no difference in lead content between ponds A and C.

• To compare ponds B and C, we write

Fish Pond   Lead Contents (in ppb)                Total
B           10    11     11     12    15   18
Rank        6.5   8.5    8.5    10    11   12      56.5
C           4     5      6      6     9    10
Rank        1     2      3.5    3.5   5    6.5     21.5

The test statistic is W = 56.5, the sum of the ranks in the first sample, since the sizes of the two samples are the same. The upper-tailed critical value from Table 2 for n1 = n2 = 6 and α = 0.01 is WU = 55. Thus, the null hypothesis should be rejected even at the 1% level of significance. We conclude that the lead content in pond B differs from that in pond C. The code in SAS that does the pairwise testing is given as

proc npar1way data=lead_contents wilcoxon;
class pond;
var lead;
exact;
where (pond ne 'B');
run;
proc npar1way data=lead_contents wilcoxon;
class pond;
var lead;
exact;
where (pond ne 'A');
run;

The relevant output is

• for testing A vs. C

Wilcoxon Two-Sample Test
Statistic (S)    31.5000
Exact Test
Two-Sided Pr >= |S - Mean|    0.2489

• for testing B vs. C

Wilcoxon Two-Sample Test
Statistic (S)    56.5000
Exact Test
Two-Sided Pr >= |S - Mean|    0.0043

The P-value is larger than 0.05 when comparing A and C, and is less than 0.01 when comparing B and C. These results yield the same conclusion as above.
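The tie-corrected H-statistic of formula (2.4) can be cross-checked in Python (a sketch, not part of the original SAS solution):

```python
# Kruskal-Wallis H with the tie correction (Exercise 2.3 data)
groups = {
    "A": [3, 4, 4, 5, 7, 8],
    "B": [10, 11, 11, 12, 15, 18],
    "C": [4, 5, 6, 6, 9, 10],
}
pooled = [v for g in groups.values() for v in g]
N = len(pooled)

def midrank(v):
    # Average rank of value v in the pooled sample
    return sum(x < v for x in pooled) + (sum(x == v for x in pooled) + 1) / 2

rank_sums = {g: sum(midrank(v) for v in vals) for g, vals in groups.items()}

h_uncorrected = 12 / (N * (N + 1)) * sum(
    rank_sums[g] ** 2 / len(vals) for g, vals in groups.items()
) - 3 * (N + 1)

# Tie correction: sum of t^3 - t over each group of tied values
ties = sum(
    pooled.count(v) ** 3 - pooled.count(v) for v in set(pooled) if pooled.count(v) > 1
)
h = h_uncorrected / (1 - ties / (N ** 3 - N))
print(round(h, 4))  # 11.8552
```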

Exercise 2.4 After assigning ranks, we get

Temperature   Rates              Ranks             Total
24°C          88  54  65  55     15   1    3   2   21
28°C          67  72  76  80     4    5    7   9   25
32°C          93  82  84  78     16   11   12  8   47
36°C          86  87  81  73     13   14   10  6   43

There are no ties among the ranks; therefore, (2.3) will be used for computation of the Kruskal-Wallis test statistic. We have that n1 = n2 = n3 = n4 = 4, N = 16, R1 = 21, R2 = 25, R3 = 47, and R4 = 43. The H-statistic is

H = 12(21^2/4 + 25^2/4 + 47^2/4 + 43^2/4)/[16(16 + 1)] - 3(16 + 1) = 5.5147.

Next, we look up the critical value in Table 6. For α = 0.05 and the sample sizes 4, 4, 4, and 4, the critical value is 7.235. The observed test statistic is smaller than the critical value, indicating that the null hypothesis should not be rejected. The conclusion is that the germination rates don’t differ. No post-hoc pairwise comparison is necessary. The code in SAS is:


data germination;
input temperature $ rate @@;
cards;
24 88 24 54 24 65 24 55
28 67 28 72 28 76 28 80
32 93 32 82 32 84 32 78
36 86 36 87 36 81 36 73
;
proc npar1way data=germination wilcoxon;
class temperature;
var rate;
exact;
run;

The output is

Kruskal-Wallis Test
Chi-Square               5.5147
Exact Pr >= Chi-Square   0.1349

The P-value is larger than 0.05, hence the null is not rejected.


Chapter 3

Exercise 3.1 The ranked data are:

Price of Gas   Rank, x   Price of Milk   Rank, y   x - y
Per Gallon               Per Gallon
$1.78          1         $1.30           1         0
$2.11          3         $1.70           2         1
$2.01          2         $1.88           3         -1
$2.17          4         $2.15           4         0
$2.45          5         $2.20           6         -1
$2.76          6         $2.25           7         -1
$3.12          7         $2.19           5         2
$3.24          8.5       $2.45           8         0.5
$3.56          11        $2.87           9         2
$3.70          12        $2.99           10        2
$3.42          10        $3.15           12        -2
$3.24          8.5       $3.06           11        -2.5
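The ranks, the sum of squared rank differences, and the tie-corrected coefficient of formula (3.4) can all be cross-checked with a Python sketch (not part of the original SAS solution):

```python
from math import sqrt

# Average annual prices (Exercise 3.1 data)
gas = [1.78, 2.11, 2.01, 2.17, 2.45, 2.76, 3.12, 3.24, 3.56, 3.70, 3.42, 3.24]
milk = [1.30, 1.70, 1.88, 2.15, 2.20, 2.25, 2.19, 2.45, 2.87, 2.99, 3.15, 3.06]
n = len(gas)

def midranks(values):
    return [sum(v < x for v in values) + (sum(v == x for v in values) + 1) / 2
            for x in values]

x, y = midranks(gas), midranks(milk)
d2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))  # sum of (x_i - y_i)^2 = 26.5

def ties(values):
    # Sum of t^3 - t over groups of tied values
    return sum(values.count(v) ** 3 - values.count(v) for v in set(values))

tx, ty = ties(gas), ties(milk)  # 6 and 0
m = n ** 3 - n
rs = (m - 6 * d2 - (tx + ty) / 2) / sqrt(m ** 2 - (tx + ty) * m + tx * ty)
print(d2, tx, ty, round(rs, 4))  # 26.5 6 0 0.9072
```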

There are two tied observations of x, thus Tx = 2^3 - 2 = 6. There are no tied values of y, so Ty = 0. To compute the Spearman correlation coefficient rs, we use (3.4) with n = 12, Tx = 6, Ty = 0, and sum_{i=1}^{n} (xi - yi)^2 = 26.5:

rs = [n^3 - n - 6 sum_{i=1}^{n} (xi - yi)^2 - (Tx + Ty)/2] / sqrt[(n^3 - n)^2 - (Tx + Ty)(n^3 - n) + Tx Ty]
   = [12^3 - 12 - 6(26.5) - (6 + 0)/2] / sqrt[(12^3 - 12)^2 - (6 + 0)(12^3 - 12) + (6)(0)] = 0.9072.

To test H0 : ρ = 0 against H1 : ρ > 0, we look up in Table 7 the critical value that corresponds to a one-tailed test with n = 12 and α = 0.01. The critical value is 0.671. Therefore, we reject the null at the 1% significance level and conclude that positive correlation exists between average annual gasoline and milk prices in that geographic area. To do the analysis in SAS, we type the following code:

data prices;
input gasoline milk;
cards;
1.78 1.30
2.11 1.70
2.01 1.88
2.17 2.15
2.45 2.20
2.76 2.25
3.12 2.19
3.24 2.45
3.56 2.87
3.70 2.99
3.42 3.15
3.24 3.06
;

proc freq data=prices;
table gasoline*milk;
exact scorr;
run;

It produces the correlation coefficient and the exact P-value.

Spearman Correlation Coefficient
Correlation (r)         0.9072
Exact Test
One-sided Pr >= r    5.824E-05

To compute the approximate P-value, we run this code:

proc corr data=prices spearman;
var gasoline milk;
run;

The output is

Spearman Correlation Coefficients, N = 12
Prob > |r| under H0: Rho=0

            gasoline    milk
gasoline    1.00000     0.90718

Parameter               Parameter Estimate    Pr > ChiSq
age_at_inauguration     -0.13058              0.0010

Obs   age_at_inauguration   lifespan   s
1     55.3895               0.0        1.00000
2     55.3895               53.6       0.97779
3     55.3895               57.1       0.95410
4     55.3895               57.7       0.93008
5     55.3895               60.2       0.90529
6     55.3895               60.5       0.87736
7     55.3895               63.2       0.82107
8     55.3895               64.4       0.78935
9     55.3895               64.9       0.75759
10    55.3895               65.6       0.72368
11    55.3895               66.6       0.69081
12    55.3895               67.1       0.65813
13    55.3895               67.6       0.62561
14    55.3895               67.8       0.59311
15    55.3895               68.1       0.56098
16    55.3895               70.3       0.53028
17    55.3895               71.3       0.49933
18    55.3895               71.8       0.46491
19    55.3895               72.5       0.42841
20    55.3895               73.2       0.38973
21    55.3895               74.2       0.35238
22    55.3895               77.1       0.31114
23    55.3895               78.2       0.27361
24    55.3895               78.5       0.23885
25    55.3895               79.6       0.20685
26    55.3895               80.6       0.17401
27    55.3895               81.3       0.14267
28    55.3895               83.2       0.11187
29    55.3895               85.3       0.08331
30    55.3895               88.6       0.05737
31    55.3895               90.2       0.03636
32    55.3895               90.7       0.01447
33    55.3895               93.3       0.00290
34    55.3895               93.5       0.00035

The estimated survival function has the following form:

S^(t) = [Sbar(t)]^r^, where r^ = exp{-0.13058 (age_at_inauguration - 55.3895)}.

The values of the step function Sbar(t) are given in the last column, s. Since the age at inauguration is a continuous variable, the estimated coefficient has the following interpretation: if the age at inauguration increases by one year, the hazard of dying changes by 100(exp{-0.13058} - 1)% = -12.24%, that is, decreases by 12.24%.

(c) Thomas Jefferson was inaugurated at the age of 57.9 years and lived for 83.2 years. The estimated survival function is

S^(83.2) = [Sbar(83.2)]^exp{-0.13058(57.9-55.3895)} = 0.11187^0.72049 = 0.20635,

that is, according to the fitted Cox model, he had roughly a 21% chance of surviving longer than he did.
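The Jefferson calculation follows the standard Cox-model adjustment S^(t | x) = Sbar(t)^exp{β(x - x̄)}; a Python sketch (not part of the original solution), with the baseline value read off the listing above:

```python
from math import exp

# Survival adjustment for a specific covariate value under the fitted Cox model
beta = -0.13058        # estimated coefficient for age_at_inauguration
xbar = 55.3895         # mean age at inauguration
sbar_83_2 = 0.11187    # baseline survival Sbar(83.2) from the listing above

r_hat = exp(beta * (57.9 - xbar))  # Jefferson was inaugurated at 57.9
s_hat = sbar_83_2 ** r_hat
print(round(r_hat, 5), round(s_hat, 5))  # 0.72049 0.20635
```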

Exercise 6.6 (a) Fitting of the full model is done by submitting the code:

data valve_replacement;
input age_at_surgery gender $ valve_diameter nyha_class $
      duration_years censored @@;
nyha_I=(nyha_class='I');
nyha_II=(nyha_class='II');
nyha_III=(nyha_class='III');
male=(gender='M');
cards;

27 45 38 45 47 49 50 52 54 55 56 57 61 63 66 67 68 70 72 73 ;

F F F M F M M F M F F F F F M M F F M F

19 23 21 29 21 25 25 23 19 21 19 21 19 21 23 21 21 23 25 19

III IV I II III I II II III IV II I II II II IV III III II III

0.9 0 0.5 0.8 1.3 3.1 1 4.4 2.8 3.7 1.7 0 0.1 3.3 2.8 3.9 1.1 0 1.1 1.5 2.8 1 1.6 0 3.2 3.1 0

56 0 0 0 48 0 1 1 55 0 1 0 1 64 0 0 69 71 1 74

F 38 44 46 M 49 52 53 F 56 57 60 62 M 66 68 F F 73 F

19 M F M 29 F F F 19 M M F F 29 F M 25 23 F 23

III 25 19 27 II 27 21 19 II 23 23 23 21 II 23 25 II II 23 II

0.9 0 II 2.0 II 1.7 I 4.1 3.6 1 II 3.2 I 4.2 I 2.1 2.5 0 I 2.1 III 1.6 0 I 1.0 II 1.7 4.1 1 I 4.9 II 0.8 4.8 1 3.8 1 I 1.2 3.3 1

0 0 1 0 1 0 1 0 0 1 0

0

;
proc phreg data=valve_replacement outest=betas;
model duration_years*censored(1)=age_at_surgery male
      valve_diameter nyha_I nyha_II nyha_III;
baseline out=outdata survival=Sbar;
run;
proc print data=betas;
run;
proc print data=outdata;
run;

The results are

Parameter          Parameter Estimate   Pr > ChiSq
age_at_surgery         -0.04397           0.0365
male                   -0.28846           0.6093
valve_diameter         -0.06791           0.4822
nyha_I                 -2.81505           0.0016
nyha_II                -2.91527           0.0008
nyha_III               -2.27741           0.0080

In the data set outdata the covariates are held constant at age_at_surgery = 56.9, male = 0.35, valve_diameter = 22.6, nyha_I = 0.25, nyha_II = 0.475, and nyha_III = 0.2, and the survival estimates are:

duration_years      Sbar
    0.0           1.00000
    0.1           0.98852
    0.5           0.97441
    0.8           0.93332
    0.9           0.89182
    1.0           0.86682
    1.1           0.81795
    1.2           0.79385
    1.3           0.77018
    1.5           0.74678
    1.6           0.68906
    1.7           0.60488
    2.0           0.57396
    2.1           0.54284
    2.5           0.51000
    2.8           0.47703
    3.1           0.44001
    3.2           0.39366
    4.4           0.21794

Judging by the P-values, male and valve_diameter are insignificant predictors at the 5% level. The fitted survival model takes the form

Ŝ(t) = [Sbar(t)]^r̂,  where r̂ = exp{−0.04397(age_at_surgery − 56.9) − 0.28846(male − 0.35) − 0.06791(valve_diameter − 22.6) − 2.81505(nyha_I − 0.25) − 2.91527(nyha_II − 0.475) − 2.27741(nyha_III − 0.2)}.

The estimates of the values of the step function Sbar(t) are given in the column Sbar.

(b) The estimated regression coefficients may be interpreted as follows.

• age_at_surgery: If the age at surgery increases by one year, the hazard of dying or experiencing a complication changes by 100(exp{−0.04397} − 1)% = −4.30%, that is, decreases by 4.30%.

• male: The hazard function for males is 100 exp{−0.28846}% = 74.94% of that for females.

• valve_diameter: This variable was fit as continuous; thus, when interpreting, we say that if the valve diameter were to increase by 1 mm, the hazard function would change by 100(exp{−0.06791} − 1)% = −6.57%, that is, would decrease by 6.57%.

• nyha_I: The hazard function for class I patients is 100 exp{−2.81505}% = 5.99% of that for class IV patients.

• nyha_II: The hazard function for class II patients is 100 exp{−2.91527}% = 5.42% of that for class IV patients.

• nyha_III: The hazard function for class III patients is 100 exp{−2.27741}% = 10.25% of that for class IV patients.

(c) We are given that t = 1 year, age_at_surgery = 64, male = 0, valve_diameter = 21, nyha_I = 0, nyha_II = 0, and nyha_III = 1. The estimated relative risk for this patient is

r̂ = exp{−0.04397(64 − 56.9) − 0.28846(0 − 0.35) − 0.06791(21 − 22.6) − 2.81505(0 − 0.25) − 2.91527(0 − 0.475) − 2.27741(1 − 0.2)} = 1.17824,

and thus the survival beyond one year is estimated as

Ŝ(1) = (0.86682)^1.17824 = 0.84502,

which means about an 84.5% chance of survival.

(d) The code that utilizes the CLASS statement is as follows:

proc phreg data=valve_replacement outest=betas;
class gender (ref='F') nyha_class/param=ref;
model duration_years*censored(1)=age_at_surgery gender
      valve_diameter nyha_class;
baseline out=outdata survival=Sbarbar;
run;
proc print data=betas;
run;
proc print data=outdata;
run;

From SAS output, the estimates of the regression coefficients and the survival function are


Parameter          Parameter Estimate   Pr > ChiSq
age_at_surgery         -0.04397           0.0365
gender M               -0.28846           0.6093
valve_diameter         -0.06791           0.4822
nyha_class I           -2.81505           0.0016
nyha_class II          -2.91527           0.0008
nyha_class III         -2.27741           0.0080

In the data set outdata the covariates are held constant at age_at_surgery = 56.9, valve_diameter = 22.6, gender = F, and nyha_class = IV, and the survival estimates are:

duration_years     Sbarbar
    0.0            1.00000
    0.1            0.84994
    0.5            0.69415
    0.8            0.37840
    0.9            0.19940
    1.0            0.13362
    1.1            0.05901
    1.2            0.03873
    1.3            0.02529
    1.5            0.01637
    1.6            0.00527
    1.7            0.00084
    2.0            0.00040
    2.1            0.00018
    2.5            0.00008
    2.8            0.00003
    3.1            0.00001
    3.2            0.00000
    4.4            0.00000

The estimated survival function has the form

Ŝ(t) = [Sbarbar(t)]^r̂,  where r̂ = exp{−0.04397(age_at_surgery − 56.9) − 0.28846 male − 0.06791(valve_diameter − 22.6) − 2.81505 nyha_I − 2.91527 nyha_II − 2.27741 nyha_III}.

Finally, we verify that the following identity holds:

Sbar(t) = [Sbarbar(t)]^{exp{(−0.28846)(0.35)+(−2.81505)(0.25)+(−2.91527)(0.475)+(−2.27741)(0.2)}} = [Sbarbar(t)]^{0.071008}.

Indeed, the two sides of the identity coincide for all but a few of the largest values of time t (due to the small values of Sbarbar(t)):

[Sbarbar(t)]^0.071008     Sbar(t)
       1.00000            1.00000
       0.98852            0.98852
       0.97441            0.97441
       0.93332            0.93332
       0.89181            0.89182
       0.86682            0.86682
       0.81795            0.81795
       0.79385            0.79385
       0.77019            0.77018
       0.74676            0.74678
       0.68902            0.68906
       0.60478            0.60488
       0.57374            0.57396
       0.54212            0.54284
       0.51178            0.51000
       0.47735            0.47703
       0.44153            0.44001
       0.00000            0.39366
       0.00000            0.21794
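The numerical claims in parts (b)-(d) can be re-derived from the coefficient estimates alone. A small Python sanity check (the dictionary keys are my own shorthand; the baseline values 0.86682 and 0.84994 are copied from the SAS listings above):

```python
import math

# coefficient estimates from the PROC PHREG output
b = {"age": -0.04397, "male": -0.28846, "diam": -0.06791,
     "I": -2.81505, "II": -2.91527, "III": -2.27741}

# (b) hazard-ratio interpretations
age_pct = 100 * (math.exp(b["age"]) - 1)   # % change per extra year of age
male_pct = 100 * math.exp(b["male"])       # hazard for males vs. females
classI_pct = 100 * math.exp(b["I"])        # class I vs. class IV

# (c) relative risk and one-year survival for age 64, female,
# 21 mm valve, NYHA class III (covariates centered at sample means)
r = math.exp(b["age"] * (64 - 56.9) + b["male"] * (0 - 0.35)
             + b["diam"] * (21 - 22.6) + b["I"] * (0 - 0.25)
             + b["II"] * (0 - 0.475) + b["III"] * (1 - 0.2))
S1 = 0.86682 ** r

# (d) exponent linking the two baseline survival estimates
c = math.exp(b["male"] * 0.35 + b["I"] * 0.25
             + b["II"] * 0.475 + b["III"] * 0.2)
print(round(r, 5), round(S1, 5), round(c, 6))
```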


Chapter 7

Exercise 7.1 (a) The following SAS code requests a histogram and kernel density estimator.

data Old_Faithful;
input minutes @@;
cards;
65 82 84 54 85 58 79 57 88 68
76 78 74 85 75 65 76 58 83 50
87 78 78 74 66 84 84 98 93 59
;
proc univariate data=Old_Faithful;
histogram minutes/cfill=gray kernel(color=black);
run;

The graph is

The histogram appears to be unimodal, slightly skewed to the left. The density estimate corresponds to the optimal standardized bandwidth c = 0.7852 (a bandwidth of 7.5563). The density curve is unimodal and a bit left-skewed.

(b) The following lines of code produce a histogram with the midpoints 45, 50, 55, . . . , 100, and kernel density curves with the normal kernel (c = 0.4, 0.5, 0.6, 0.7), quadratic kernel (c = 0.8, 0.9, 1.0, 1.1), and triangular kernel (c = 0.9, 1.0, 1.1, 1.2).

title 'c = 0.4 0.8 0.9';
proc univariate data=Old_Faithful;
histogram minutes/cfill=gray midpoints=45 to 100 by 5
kernel(c=0.4 0.8 0.9 k=normal quadratic triangular
color=black l=1 2 3) name='graph1';
run;
title 'c = 0.5 0.9 1.0';
proc univariate data=Old_Faithful;
histogram minutes/cfill=gray midpoints=45 to 100 by 5
kernel(c=0.5 0.9 1 k=normal quadratic triangular
color=black l=1 2 3) name='graph3';
run;
title 'c = 0.6 1.0 1.1';
proc univariate data=Old_Faithful;
histogram minutes/cfill=gray midpoints=45 to 100 by 5
kernel(c=0.6 1 1.1 k=normal quadratic triangular
color=black l=1 2 3) name='graph2';
run;
title 'c = 0.7 1.1 1.2';
proc univariate data=Old_Faithful;
histogram minutes/cfill=gray midpoints=45 to 100 by 5
kernel(c=0.7 1.1 1.2 k=normal quadratic triangular
color=black l=1 2 3) name='graph4';
run;
proc greplay igout=work.gseg tc=sashelp.templt
template=l2r2 nofs;
treplay 1:graph1 2:graph3 3:graph2 4:graph4;
run;
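In PROC UNIVARIATE the standardized bandwidth c is related to the actual bandwidth λ through λ = c·Q·n^(−1/5), where Q is the sample interquartile range. The part (a) values c = 0.7852 and λ = 7.5563 can be reconciled with a short Python check (the quantile rule below mimics SAS's default percentile definition and is an assumption of this sketch):

```python
import math

minutes = [65, 82, 84, 54, 85, 58, 79, 57, 88, 68,
           76, 78, 74, 85, 75, 65, 76, 58, 83, 50,
           87, 78, 78, 74, 66, 84, 84, 98, 93, 59]
n = len(minutes)
s = sorted(minutes)

# quantiles: for non-integer n*p, take the ceil(n*p)-th order statistic
q1 = s[math.ceil(0.25 * n) - 1]
q3 = s[math.ceil(0.75 * n) - 1]

# lambda = c * Q * n**(-1/5) links the standardized and actual bandwidths
lam = 0.7852 * (q3 - q1) * n ** (-1 / 5)
print(q3 - q1, round(lam, 4))
```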

After a visual comparison, we arrive at the conclusion that the curves on graphs 2 and 3 fit the data well. The data are clearly bimodal, which wasn't captured by the default histogram. The curves with the larger standardized bandwidths are too smooth, and those with the smaller bandwidths are too spiky.

Exercise 7.2 (a) The histogram is constructed using the code:

data heights;
input husband wife @@;
diff=husband-wife;
cards;
66 63 64 68 77 67
66 65 76 67 64 66
75 62 69 67 72 68
64 68 68 64 68 65
80 71 72 62 76 68
72 69 62 62 65 66
72 66 76 69 68 66
78 70 83 70 77 66
65 61 66 66 77 63
73 70 70 64 75 66

63 70 75 79 77 73 69 67 77 72 ;

63 68 69 66 64 70 68 70 71 62

63 70 67 75 76 76 69 72 70 74

65 63 61 68 62 62 64 62 61 63

73 76 64 77 63 74 80 64 69 79

70 62 64 65 66 63 64 61 70 64

proc univariate data=heights;
histogram diff/cfill=gray kernel(color=black);
run;

The graph is

From the graph, the density appears to be unimodal and roughly symmetric. (b) The histogram with midpoints −4, −2, . . . , 16 is constructed and the density curves are fit with the normal kernel (c = 0.4, 0.5, 0.6, and 0.7), quadratic kernel (c = 1.6, 1.7, 1.8, and 1.9), and triangular kernel (c = 1.0, 1.2, 1.4, and 1.6). The code is

title 'c = 0.4 1.6 1.0';
proc univariate data=heights;
histogram diff/cfill=gray midpoints=-4 to 16 by 2
kernel(c=0.4 1.6 1 k=normal quadratic triangular
color=black l=1 2 3) name='graph1';
run;
title 'c = 0.5 1.7 1.2';
proc univariate data=heights;
histogram diff/cfill=gray midpoints=-4 to 16 by 2
kernel(c=0.5 1.7 1.2 k=normal quadratic triangular
color=black l=1 2 3) name='graph3';
run;
title 'c = 0.6 1.8 1.4';
proc univariate data=heights;
histogram diff/cfill=gray midpoints=-4 to 16 by 2
kernel(c=0.6 1.8 1.4 k=normal quadratic triangular
color=black l=1 2 3) name='graph2';
run;
title 'c = 0.7 1.9 1.6';
proc univariate data=heights;
histogram diff/cfill=gray midpoints=-4 to 16 by 2
kernel(c=0.7 1.9 1.6 k=normal quadratic triangular
color=black l=1 2 3) name='graph4';
run;
proc greplay igout=work.gseg tc=sashelp.templt
template=l2r2 nofs;
treplay 1:graph1 2:graph3 3:graph2 4:graph4;
run;

The constructed graphs follow.


The histogram and the density curves appear to be unimodal and skewed to the right. The best fit is shown in graphs 2 and 3.
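The three kernels requested by k=normal quadratic triangular are, before bandwidth scaling, the standard normal density, the quadratic (Epanechnikov) kernel, and the triangular kernel. A Python sketch of their textbook forms (an illustration, not SAS's internals), with a crude check that each has unit mass:

```python
import math

def k_normal(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def k_quadratic(u):  # a.k.a. Epanechnikov
    return 0.75 * (1 - u * u) if abs(u) <= 1 else 0.0

def k_triangular(u):
    return 1 - abs(u) if abs(u) <= 1 else 0.0

# crude Riemann-sum check that each kernel integrates to 1
step = 0.001
grid = [-5 + i * step for i in range(10001)]
masses = [sum(k(u) for u in grid) * step
          for k in (k_normal, k_quadratic, k_triangular)]
print([round(m, 3) for m in masses])
```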

Exercise 7.3 (a) The code is

data lesions;
input length @@;
cards;
2.0 4.1 6.7 4.5 5.7 0.3 6.3 3.5 6.9 2.5 0.9 6.5 6.8 5.4 5.4
3.3 6.9 5.2 2.7 2.9 3.6 2.5 3.8 2.9 3.6 0.9 6.3 3.1 3.7 2.4
1.7 5.4 5.7 4.8 4.9 2.4 3.8 5.3 6.5 6.5 3.2 5.3 4.4 5.4 6.4
0.4 4.2 7.3 7.1 4.4 3.0 2.9 3.4 3.9 3.2 0.6 6.8 6.1 3.0 7.1
3.2 5.6 6.5 7.3 4.2 2.4 7.2 7.1 5.3 5.4 0.3 3.5 6.1 2.8 2.7
0.9 4.0 5.0 3.8 4.1 2.9 6.3 3.4 6.6 4.1 0.3 6.4 4.9 5.9 4.8
3.5 6.0 6.2 7.3 4.2 1.2 4.2 3.5 6.7 3.1 2.4 3.3 4.8 7.1 2.7
1.1 6.6 6.7 3.1 5.8 3.0 5.6 6.6 4.3 6.1 1.9 6.5 7.0 7.0 3.5
2.0 4.5 5.7 5.5 4.6
;
proc univariate data=lesions;
histogram length/cfill=gray kernel(color=black);
run;

The graph is

The histogram appears roughly symmetric and possibly bimodal. (b) The code is

title 'c = 0.4 0.9 1.0';
proc univariate data=lesions;
histogram length/cfill=gray midpoints=1 to 8 by .5
kernel(c=0.4 0.9 1 k=normal quadratic triangular
color=black l=1 2 3) name='graph1';
run;
title 'c = 0.5 1.0 1.2';
proc univariate data=lesions;
histogram length/cfill=gray midpoints=1 to 8 by .5
kernel(c=0.5 1 1.2 k=normal quadratic triangular
color=black l=1 2 3) name='graph3';
run;
title 'c = 0.6 1.1 1.4';
proc univariate data=lesions;
histogram length/cfill=gray midpoints=1 to 8 by .5
kernel(c=0.6 1.1 1.4 k=normal quadratic triangular
color=black l=1 2 3) name='graph2';
run;
title 'c = 0.7 1.2 1.6';
proc univariate data=lesions;
histogram length/cfill=gray midpoints=1 to 8 by .5
kernel(c=0.7 1.2 1.6 k=normal quadratic triangular
color=black l=1 2 3) name='graph4';
run;
proc greplay igout=work.gseg tc=sashelp.templt
template=l2r2 nofs;
treplay 1:graph1 2:graph3 3:graph2 4:graph4;
run;

The four graphs are given below. Graphs 2 and 3 fit the data well.


Chapter 8

Exercise 8.1 The code below produces the 95% confidence interval for the mean based on the t-distribution via the jackknife procedure.

data Old_Faithful;
input minutes @@;
cards;
65 82 84 54 85 58 79 57 88 68
76 78 74 85 75 65 76 58 83 50
87 78 78 74 66 84 84 98 93 59
;
proc univariate noprint data=Old_Faithful;
var minutes;
output out=stats n=n_obs mean=mean_all;
run;
data _null_;
set stats;
call symput('n_obs', n_obs);
call symput('mean_all', mean_all);
run;
data jackknife_samples;
do sample=1 to &n_obs;
do record=1 to &n_obs;
set Old_Faithful point=record;
if sample ne record then output;
end;
end;
stop;
run;
proc univariate noprint data=jackknife_samples;
var minutes;
by sample;
output out=jackknife_replicates mean=mean_minutes;
run;
data jackknife_replicates;
set jackknife_replicates;
pseudomean=&n_obs*&mean_all-(&n_obs-1)*mean_minutes;
by sample;
keep pseudomean;
run;
proc means noprint data=jackknife_replicates;
var pseudomean;
output out=mean_CI LCLM=mean_95_CI_lower UCLM=mean_95_CI_upper;
run;
proc print data=mean_CI;
run;

The result is as follows:

mean_95_CI_lower    mean_95_CI_upper
    70.1424             79.2576
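The jackknife steps above translate directly to Python; a sketch, with t_{0.975,29} ≈ 2.0452 hardcoded in place of a t-table lookup. Note that for the sample mean the pseudo-values reduce to the original observations, so the interval coincides with the ordinary t-interval:

```python
import statistics as st

minutes = [65, 82, 84, 54, 85, 58, 79, 57, 88, 68,
           76, 78, 74, 85, 75, 65, 76, 58, 83, 50,
           87, 78, 78, 74, 66, 84, 84, 98, 93, 59]
n = len(minutes)
mean_all = st.mean(minutes)

# leave-one-out means and the corresponding pseudo-values
pseudo = [n * mean_all - (n - 1) * st.mean(minutes[:i] + minutes[i + 1:])
          for i in range(n)]

# t-based 95% CI on the pseudo-values
m = st.mean(pseudo)
se = st.stdev(pseudo) / n ** 0.5
lower, upper = m - 2.0452 * se, m + 2.0452 * se
print(round(lower, 4), round(upper, 4))
```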

Exercise 8.2 The following code applies the jackknife method to construct a 99% confidence interval for the Pearson correlation coefficient between heights of husbands and heights of their wives.

data heights;
input husband wife @@;
cards;
66 63 64 68 77 67 66 64 66 75
62 69 67 72 68 64 68 65 80 71
72 72 69 62 62 65 66 72 68 66
78 70 83 70 77 66 66 77 63 73
70 70 63 63 63 65 73 70 70 76
62 75 69 67 61 64 75 68 77 65
77 64 76 73 70 76 62 74 63 69

65 68 62 66 66 64 68 64 62 68

76 64 76 76 65 75 70 79 63 69

67 68 68 69 61 66 63 66 66 64

80 70 ;

64 61

67 69

70 70

72 72

62 62

64 74

61 63

77 79

71 64

proc corr noprint data=heights outp=corr_all;
var husband wife;
run;
data _null_;
set corr_all;
if (_type_='N') then call symput('n_obs', husband);
if (_type_='CORR' and _name_='husband') then
call symput('corr_all', wife);
run;
data jackknife_samples;
do sample=1 to &n_obs;
do record=1 to &n_obs;
set heights point=record;
if sample ne record then output;
end;
end;
stop;
run;
proc corr noprint data=jackknife_samples
outp=jackknife_replicates;
var husband wife;
by sample;
run;
data jackknife_replicates;
set jackknife_replicates;
if(_type_='CORR' and _name_='husband');
pseudo_corr=&n_obs*&corr_all-(&n_obs-1)*wife;
keep pseudo_corr;
run;
proc means noprint data=jackknife_replicates alpha=0.01;
var pseudo_corr;
output out=CI lclm=CI_99_lower uclm=CI_99_upper;
run;
proc print data=CI;
run;

The output is

CI_99_lower    CI_99_upper
 -0.076537       0.55789
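The pseudo-value construction is the same for any statistic: θ̃_i = nθ̂ − (n−1)θ̂_(−i). A minimal Python illustration with made-up, exactly collinear data, for which every leave-one-out correlation, and hence every pseudo-value, equals 1, so the interval degenerates to a point:

```python
import statistics as st

def pearson(x, y):
    mx, my = st.mean(x), st.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# exactly collinear toy data
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]
n = len(x)
r_all = pearson(x, y)
pseudo = [n * r_all - (n - 1) * pearson(x[:i] + x[i + 1:], y[:i] + y[i + 1:])
          for i in range(n)]
print(r_all, pseudo)
```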

Exercise 8.3 Using the code below, the 95% CI for the Spearman correlation coefficient between gasoline and milk prices is computed based on the jackknife method.

data prices;
input gasoline milk @@;
cards;
1.78 1.30 2.11 2.76 2.25 3.12 3.42 3.15 3.24
1.70 2.01 1.88 2.17 2.15 2.45 2.20 2.19 3.24 2.45 3.56 2.87 3.70 2.99 3.06
;
proc corr noprint data=prices outs=corr_all;
var gasoline milk;
run;
data _null_;
set corr_all;
if (_type_='N') then call symput('n_obs', gasoline);
if (_type_='CORR' and _name_='gasoline') then
call symput('corr_all', milk);
run;
data jackknife_samples;
do sample=1 to &n_obs;
do record=1 to &n_obs;
set prices point=record;
if sample ne record then output;
end;
end;
stop;
run;
proc corr noprint data=jackknife_samples
outs=jackknife_replicates;
var gasoline milk;
by sample;
run;
data jackknife_replicates;
set jackknife_replicates;
if(_type_='CORR' and _name_='gasoline');
pseudo_corr=&n_obs*&corr_all-(&n_obs-1)*milk;
keep pseudo_corr;
run;
proc means noprint data=jackknife_replicates alpha=0.05;
var pseudo_corr;
output out=CI lclm=CI_95_lower uclm=CI_95_upper;
run;
proc print data=CI;
run;

The interval is given as

CI_95_lower    CI_95_upper
  0.79723        1.08750

Note that this interval extends beyond the largest possible value of one; therefore, in practice, the interval [0.79723, 1.00000] may be used.

Exercise 8.4 A 90% confidence interval for the variance based on the jackknife estimation method is computed by the following code:

data lesions;
input length @@;
cards;
2.0 4.1 6.7 4.5 5.7 0.3 6.3 3.5 6.9 2.5 0.9 6.5 6.8 5.4 5.4
3.3 6.9 5.2 2.7 2.9 3.6 2.5 3.8 2.9 3.6 0.9 6.3 3.1 3.7 2.4
1.7 5.4 5.7 4.8 4.9 2.4 3.8 5.3 6.5 6.5 3.2 5.3 4.4 5.4 6.4
0.4 4.2 7.3 7.1 4.4 3.0 2.9 3.4 3.9 3.2 0.6 6.8 6.1 3.0 7.1
3.2 5.6 6.5 7.3 4.2 2.4 7.2 7.1 5.3 5.4 0.3 3.5 6.1 2.8 2.7
0.9 4.0 5.0 3.8 4.1 2.9 6.3 3.4 6.6 4.1 0.3 6.4 4.9 5.9 4.8
3.5 6.0 6.2 7.3 4.2 1.2 4.2 3.5 6.7 3.1 2.4 3.3 4.8 7.1 2.7
1.1 6.6 6.7 3.1 5.8 3.0 5.6 6.6 4.3 6.1 1.9 6.5 7.0 7.0 3.5
2.0 4.5 5.7 5.5 4.6
;
proc univariate noprint data=lesions;
var length;
output out=stats n=n_obs var=var_all;
run;
data _null_;
set stats;
call symput('n_obs', n_obs);
call symput('var_all', var_all);
run;
data jackknife_samples;
do sample=1 to &n_obs;
do record=1 to &n_obs;
set lesions point=record;
if sample ne record then output;
end;
end;
stop;
run;
proc univariate noprint data=jackknife_samples;
var length;
by sample;
output out=jackknife_replicates var=var_length;
run;
data jackknife_replicates;
set jackknife_replicates;
pseudovariance=&n_obs*&var_all-(&n_obs-1)*var_length;
by sample;
keep pseudovariance;
run;
proc means noprint data=jackknife_replicates alpha=0.1;
var pseudovariance;
output out=CI LCLM=CI_90_lower UCLM=CI_90_upper;
run;
proc print data=CI;
run;

The interval is

CI_90_lower    CI_90_upper
  2.98429        4.16074
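A generic Python sketch of the jackknife point estimate (the average of the pseudo-values). Applied to the unbiased sample variance it returns the sample variance unchanged, which is a useful check that the pseudo-value bookkeeping is right; the data below are made up:

```python
import statistics as st

def jackknife_estimate(data, stat):
    """Average of the jackknife pseudo-values of `stat`."""
    n = len(data)
    full = stat(data)
    pseudo = [n * full - (n - 1) * stat(data[:i] + data[i + 1:])
              for i in range(n)]
    return st.mean(pseudo)

# since the sample variance (n-1 divisor) is already unbiased,
# the jackknife bias correction leaves it unchanged
sample = [2.0, 4.1, 6.7, 4.5, 5.7]
print(jackknife_estimate(sample, st.variance), st.variance(sample))
```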

Exercise 8.5 The SAS code that computes the 95% Efron’s percentile confidence interval for the true value of the mean is as follows:

proc surveyselect data=Old_Faithful out=bootstrap_samples
outhits seed=31415926 method=urs samprate=1 rep=1000;
run;
proc univariate noprint data=bootstrap_samples;
var minutes;
by replicate;
output out=bootstrap_replicates mean=mean_minutes;
run;
proc univariate data=bootstrap_replicates;
var mean_minutes;
output out=mean_CI pctlpre=CI_95_ pctlpts=2.5 97.5
pctlname=lower upper;
run;
proc print data=mean_CI;
run;

The interval produced by SAS is

CI_95_lower    CI_95_upper
   70.55          78.70

This interval is slightly narrower than the one computed in Exercise 8.1 by the jackknife method, [70.1424, 79.2576].
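Efron's percentile method is easy to sketch in Python. The resampling stream and seed differ from SAS's, so the endpoints only roughly agree with the output above; the index convention for the percentiles is one simple choice:

```python
import random
import statistics as st

minutes = [65, 82, 84, 54, 85, 58, 79, 57, 88, 68,
           76, 78, 74, 85, 75, 65, 76, 58, 83, 50,
           87, 78, 78, 74, 66, 84, 84, 98, 93, 59]

random.seed(31415926)  # a fixed seed, but not SAS's random stream
B = 1000
boot_means = sorted(st.mean(random.choices(minutes, k=len(minutes)))
                    for _ in range(B))

# Efron's percentile interval: 2.5th and 97.5th percentiles
# of the bootstrap replicates
lower = boot_means[round(0.025 * B)]
upper = boot_means[round(0.975 * B) - 1]
print(round(lower, 2), round(upper, 2))
```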

Exercise 8.6 Below is the SAS code that produces the 99% Efron's percentile confidence interval for the Pearson correlation coefficient between heights of husbands and heights of wives. Recall that for a 99% confidence level, a minimum of 5,000 bootstrap samples must be drawn.

proc surveyselect data=heights out=bootstrap_samples
outhits seed=5555 method=urs samprate=1 rep=5000;
run;
proc corr noprint data=bootstrap_samples outp=bootstrap_temp;
var husband wife;
by replicate;
run;
data bootstrap_replicates;
set bootstrap_temp;
if(_type_='CORR' and _name_='wife');
pearson_corr=husband;
keep pearson_corr;
run;
proc univariate data=bootstrap_replicates;
var pearson_corr;
output out=CI pctlpre=CI_99_ pctlpts=0.5 99.5
pctlname=lower upper;
run;
proc print data=CI;
run;

The output is

CI_99_lower    CI_99_upper
 -0.087363       0.50471

As computed in Exercise 8.2, the jackknife confidence interval [-0.076537, 0.55789] is wider than the bootstrap one.


Exercise 8.7 The 95% Efron's percentile confidence interval for the Spearman correlation coefficient between the prices of gasoline and milk is constructed using the following code.

proc surveyselect data=prices out=bootstrap_samples
outhits seed=27182818 method=urs samprate=1 rep=1000;
run;
proc corr noprint data=bootstrap_samples outs=bootstrap_temp;
var gasoline milk;
by replicate;
run;
data bootstrap_replicates;
set bootstrap_temp;
if(_type_='CORR' and _name_='gasoline');
spearman_corr=milk;
keep spearman_corr;
run;
proc univariate data=bootstrap_replicates;
var spearman_corr;
output out=CI pctlpre=CI_95_ pctlpts=2.5 97.5
pctlname=lower upper;
run;
proc print data=CI;
run;

The interval is

CI_95_lower    CI_95_upper
  0.66727        0.99464

This percentile interval is shorter than the one produced in Exercise 8.3 by the jackknife approach, [0.79723,1.00000].
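Both this exercise and Exercise 8.3 resample the Spearman coefficient, which is just the Pearson correlation computed on ranks; for untied data it reduces to the closed form below. A Python sketch with made-up, tie-free data:

```python
# Spearman's coefficient for tie-free data:
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference
def spearman(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# made-up, tie-free price-like data
x = [1.8, 2.1, 2.3, 3.4, 3.7]
y = [1.7, 2.2, 2.0, 3.1, 3.6]
print(spearman(x, y))
```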

Exercise 8.8 A 90% Efron’s percentile confidence interval for the population variance of the length of lesions is calculated by running the following SAS code.


proc surveyselect data=lesions out=bootstrap_samples
outhits seed=111222333 method=urs samprate=1 rep=1000;
run;
proc univariate noprint data=bootstrap_samples;
var length;
by replicate;
output out=bootstrap_replicates var=var_length;
run;
proc univariate data=bootstrap_replicates;
var var_length;
output out=CI pctlpre=CI_90_ pctlpts=5 95
pctlname=lower upper;
run;
proc print data=CI;
run;

The answer in this case is

CI_90_lower    CI_90_upper
  3.00262        4.13073

The jackknife interval [2.98429, 4.16074] from Exercise 8.4 is slightly wider than this bootstrap interval.
