Analysis of Variance (ANOVA)



Presentation Transcript


  1. Analysis of Variance (ANOVA) W&W, Chapter 10

  2. Introduction • Last time we learned about the chi square test for independence, which is useful for data that is measured at the nominal or ordinal level of analysis. • If we have data measured at the interval level, we can compare two or more population groups in terms of their population means using a technique called analysis of variance, or ANOVA.

  3. Completely randomized design Population 1, Population 2, …, Population k Mean = μ₁, Mean = μ₂, …, Mean = μₖ Variance = σ₁², Variance = σ₂², …, Variance = σₖ² We want to know something about how the populations compare. Do they have the same mean? We can collect random samples from each population, which gives us the following data.

  4. Completely randomized design Mean = M₁, Mean = M₂, …, Mean = Mₖ Variance = s₁², Variance = s₂², …, Variance = sₖ² n₁ cases, n₂ cases, …, nₖ cases Suppose we want to compare 3 college majors in a business school by the average annual income people make 2 years after graduation. We collect the following data (in $1000s) based on random surveys.

  5. Completely randomized design

  Accounting   Marketing   Finance
      27           23         48
      22           36         35
      33           27         46
      25           44         36
      38           39         28
      29           32         29
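
To make the example concrete, here is a minimal Python sketch (not part of the original slides) that enters the income data and verifies each major's sample mean:

```python
# Hypothetical sketch: enter the survey data ($1000s) and check
# each major's sample mean.
accounting = [27, 22, 33, 25, 38, 29]
marketing  = [23, 36, 27, 44, 39, 32]
finance    = [48, 35, 46, 36, 28, 29]

for name, xs in [("Accounting", accounting),
                 ("Marketing", marketing),
                 ("Finance", finance)]:
    print(name, sum(xs) / len(xs))
# Accounting 29.0, Marketing 33.5, Finance 37.0
```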

  6. Completely randomized design Can the dean conclude that there are differences among the majors’ incomes? H₀: μ₁ = μ₂ = μ₃ Hₐ: μ₁, μ₂, μ₃ are not all equal In this problem we must take into account: 1) The variance between samples, or the actual differences by major. This is called the sum of squares for treatment (SST).

  7. Completely randomized design 2) The variance within samples, or the variance of incomes within a single major. This is called the sum of squares for error (SSE). Recall that when we sample, there is always a chance of getting something different from the population. We account for this through #2, the SSE.

  8. F-Statistic For this test, we will calculate an F statistic, which is used to compare variances. F = [SST/(k−1)] / [SSE/(n−k)] SST = sum of squares for treatment SSE = sum of squares for error k = the number of populations n = total sample size
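
As a sketch, the formula translates directly into a small hypothetical Python helper:

```python
def f_statistic(sst, sse, k, n):
    """F = [SST/(k-1)] / [SSE/(n-k)], i.e. MST / MSE."""
    mst = sst / (k - 1)   # mean square for treatment
    mse = sse / (n - k)   # mean square for error
    return mst / mse
```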

  9. F-statistic Intuitively, the F statistic is: F = explained variance / unexplained variance Explained variance is the difference between majors. Unexplained variance is the difference based on random sampling within each group (see Figure 10-1, page 327).

  10. Calculating SST SST = Σ nᵢ(Mᵢ − M̄)² M̄ = grand mean, i.e. M̄ = ΣMᵢ/k when the samples are of equal size (more generally, the sum of all values across all groups divided by the total sample size) Mᵢ = mean for each sample k = the number of populations

  11. Calculating SST By major: Accounting M₁ = 29, n₁ = 6 Marketing M₂ = 33.5, n₂ = 6 Finance M₃ = 37, n₃ = 6 M̄ = (29 + 33.5 + 37)/3 = 33.17 SST = (6)(29 − 33.17)² + (6)(33.5 − 33.17)² + (6)(37 − 33.17)² = 193
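
The same arithmetic as a minimal Python sketch, using the sample means from this slide:

```python
# Sketch of the SST calculation (values from the slide).
sample_means = [29.0, 33.5, 37.0]   # M1, M2, M3
n_i = 6                             # cases per sample (equal sizes here)
grand_mean = sum(sample_means) / len(sample_means)
sst = sum(n_i * (m - grand_mean) ** 2 for m in sample_means)
print(round(grand_mean, 2), round(sst, 1))   # 33.17 193.0
```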

  12. Calculating SST Note that when M₁ = M₂ = M₃, SST = 0, which would support the null hypothesis. In this example the samples are of equal size, but we can also run this analysis with samples of varying sizes.

  13. Calculating SSE SSE = ΣΣ(Xᵢₜ − Mᵢ)² In other words, it is the sum of squared deviations from each sample’s own mean, added together across samples. SSE = Σ(X₁ₜ − M₁)² + Σ(X₂ₜ − M₂)² + Σ(X₃ₜ − M₃)² SSE = [(27−29)² + (22−29)² + … + (29−29)²] + [(23−33.5)² + (36−33.5)² + …] + [(48−37)² + (35−37)² + … + (29−37)²] SSE = 819.5
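
And the SSE calculation as a matching Python sketch:

```python
# Sketch of the SSE calculation: squared deviations of each observation
# from its own sample's mean, summed over all three samples.
samples = [
    [27, 22, 33, 25, 38, 29],   # Accounting
    [23, 36, 27, 44, 39, 32],   # Marketing
    [48, 35, 46, 36, 28, 29],   # Finance
]
sse = 0.0
for xs in samples:
    m = sum(xs) / len(xs)
    sse += sum((x - m) ** 2 for x in xs)
print(sse)   # 819.5
```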

  14. Statistical Output When you estimate this information in a computer program, it will typically be presented in a table as follows:

  Source of Variation   df    Sum of squares    Mean squares       F-ratio
  Treatment             k−1   SST               MST = SST/(k−1)    F = MST/MSE
  Error                 n−k   SSE               MSE = SSE/(n−k)
  Total                 n−1   SS = SST + SSE
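
Assuming SciPy is installed, scipy.stats.f_oneway computes the F statistic and a p-value directly from the raw samples, which serves as a quick cross-check of the hand calculation on the next slide:

```python
from scipy.stats import f_oneway

accounting = [27, 22, 33, 25, 38, 29]
marketing  = [23, 36, 27, 44, 39, 32]
finance    = [48, 35, 46, 36, 28, 29]

result = f_oneway(accounting, marketing, finance)
print(result.statistic, result.pvalue)   # F ≈ 1.77, p ≈ 0.20
```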

  15. Calculating F for our example F = (193/2) / (819.5/15) = 96.5/54.63 = 1.77 Our calculated F is compared to the critical value from the F-distribution, F(α, k−1, n−k), with k−1 numerator degrees of freedom and n−k denominator degrees of freedom.

  16. The Results For 95% confidence (α = .05), our critical F is 3.68 (averaging the table values for 14 and 16 denominator degrees of freedom). In this case, 1.77 < 3.68, so we fail to reject the null hypothesis. The dean is puzzled by these results because just by eyeballing the data, it looks like finance majors make more money.
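
Assuming SciPy, the exact critical value can be pulled from the F distribution's quantile function instead of interpolating between table rows:

```python
from scipy.stats import f

# alpha = .05, df = (k-1, n-k) = (2, 15)
critical = f.ppf(0.95, 2, 15)
print(round(critical, 2))   # 3.68
```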

  17. The Results Many other factors may determine the salary level, such as GPA. The dean decides to collect new data selecting one student randomly from each major with the following average grades.

  18. New data

  Average   Accounting   Marketing   Finance   M(b)
  A+            41           45         51     M(b1) = 45.67
  A             36           38         45     M(b2) = 39.67
  B+            27           33         31     M(b3) = 30.33
  B             32           29         35     M(b4) = 32
  C+            26           31         32     M(b5) = 29.67
  C             23           25         27     M(b6) = 25
            M(t)1 = 30.83  M(t)2 = 33.5  M(t)3 = 36.83   M̄ = 33.72

  19. Randomized Block Design Now the data in the 3 samples are not independent; they are matched by GPA level. Just as before, matched samples are superior to unmatched samples because they provide more information. In this case, we have added a factor that may account for some of the SSE.

  20. Two-way ANOVA Now SS(total) = SST + SSB + SSE, where SSB = the variability among blocks, a block being a matched group of observations, one from each of the populations. We can calculate a two-way ANOVA to test our null hypothesis. We will talk about this next week.
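
As a forward-looking sketch, the block sum of squares for the new data can be computed with the standard balanced-design formula SSB = k · Σ (block mean − grand mean)², which these slides do not derive; treat the formula as an assumption here:

```python
# Hypothetical sketch of SSB for the matched (blocked) data above,
# assuming the balanced-design formula SSB = k * sum_j (Mb_j - grand)^2.
blocks = [                 # [Accounting, Marketing, Finance] per GPA level
    [41, 45, 51],          # A+
    [36, 38, 45],          # A
    [27, 33, 31],          # B+
    [32, 29, 35],          # B
    [26, 31, 32],          # C+
    [23, 25, 27],          # C
]
k = 3                      # treatments (majors) per block
grand = sum(x for row in blocks for x in row) / (k * len(blocks))
ssb = sum(k * (sum(row) / k - grand) ** 2 for row in blocks)
print(round(grand, 2), round(ssb, 1))   # 33.72 854.9
```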
