Background Slides for CHEE824


Presentation Transcript


  1. Background Slides for CHEE824 • Hypothesis tests • For comparison of means • Comparison of variances • Discussion of power of a hypothesis test - type I and type II errors • Joint confidence regions (for the linear case) J. McLellan

  2. Hypothesis Tests … are an alternative to confidence limits for factoring uncertainty into decision-making Approach • make a hypothesis statement • choose the appropriate test statistic for that statement • consider the range of values the test statistic would be likely to take if the hypothesis were true • compare the value of the test statistic estimated from the data to that range - if the value is significant, the hypothesis is rejected; otherwise it is not rejected J. McLellan

  3. Example Naphtha reformer in a refinery • under the old catalyst, the octane number was 90 • under the new catalyst, an average octane number of 92 has been estimated from a sample of 4 data points • the standard deviation of the octane number in the unit is known to be 1.5 • has the octane number improved significantly? • We could use confidence limits to answer this question • for the mean, with known variance • form the interval, and see whether the old value (90) is contained in the interval for the new mean • or consider a direct test … the hypothesis test J. McLellan

  4. Example Hypothesis test - Null hypothesis H0: μ = 90 (the “status quo”) Alternate hypothesis H1: μ > 90 • approach • the mean is estimated using the sample average x̄ • if the observed average is within reasonable variation limits of the old mean, conclude that no significant change has occurred • reference distribution - Standard Normal J. McLellan

  5. Example • to compare with the Standard Normal, we must standardize • if the mean under the new catalyst were actually the old mean, then z = (x̄ - 90)/(σ/√n) = (x̄ - 90)/(1.5/√4) would be distributed as a Standard Normal random variable • observed values would vary accordingly • now choose a fence - a limit that contains 95% of the values of the Standard Normal • if the observed value exceeds the fence, then it is unlikely that the mean under the new catalyst equals the old mean • there is only a small chance of obtaining an observed average outside this range • if the value exceeds the fence, reject the null hypothesis J. McLellan

  6. Example • Compute the test statistic value using the observed average of 92: z0 = (92 - 90)/(1.5/√4) = 2/0.75 = 2.67 • now determine the fence - test at the 5% significance level, i.e., an upper tail area of 0.05 • z_{0.05} = 1.65 • compare: 2.67 > 1.65 - conclude that the mean must be significantly higher, since the likelihood of obtaining an average of 92 when the true mean is 90 is very small We only use the upper tail here, because we are interested in testing whether the new mean is greater than the old mean (fence - upper tail area is 0.05). J. McLellan
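As a quick check of the arithmetic on this slide, here is a minimal Python sketch of the one-sided z-test for the octane example. SciPy is assumed here; it is not part of the original slides.

```python
# Sketch of the octane example: one-sided z-test with known sigma.
# Numbers (old mean 90, observed average 92, sigma 1.5, n = 4) come from the slides.
from math import sqrt
from scipy.stats import norm

mu0, xbar, sigma, n = 90.0, 92.0, 1.5, 4
alpha = 0.05

z0 = (xbar - mu0) / (sigma / sqrt(n))   # observed test statistic (2.67)
fence = norm.ppf(1 - alpha)             # upper 5% point of N(0,1), ~1.645

print(f"z0 = {z0:.2f}, fence = {fence:.2f}")
if z0 > fence:
    print("Reject H0: the octane number has increased significantly.")
else:
    print("Fail to reject H0.")
```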

  7. Example • there is a small chance (0.05) that we could obtain an observed average that would lie outside the fence even though the mean had not changed • in this case, we would erroneously reject the null hypothesis, and conclude that the catalyst had caused a significant increase • referred to as a “Type I error” - false rejection • this would happen 5% of the time • to reduce, move fence further to the extreme of the distribution - reduce upper tail area • α = 0.05 is the “significance level” • (1 - α) is sometimes referred to as the “confidence level” • α is a tuning parameter for the hypothesis test J. McLellan

  8. Hypothesis Tests Review sequence 1) formulate hypothesis (H0: μ = 90 vs. H1: μ > 90) 2) form test statistic (z0 = 2.67) 3) compare to “fence” value z_{0.05} = 1.65 4) in this case, reject the null hypothesis J. McLellan

  9. Types of Hypothesis Tests One-sided tests • null hypothesis - parameter equal to old value • alternate hypothesis - parameter >, < old value • e.g., H0: μ = μ0 vs. H1: μ > μ0 Two-sided tests • null hypothesis - parameter equal to old value • alternate hypothesis - parameter not equal to old value (could be greater than, or less than) • e.g., H0: μ = μ0 vs. H1: μ ≠ μ0 In two-sided tests, two fences are used (upper and lower), and the significance area is split evenly between the lower and upper tails. J. McLellan

  10. Hypothesis Tests for Means … with known variance Two-Sided Test - at the α significance level Hypotheses: H0: μ = μ0, H1: μ ≠ μ0 Test Statistic: z0 = (x̄ - μ0)/(σ/√n) Fences: ±z_{α/2} Reject H0 if |z0| > z_{α/2} (rejection region is the two tails beyond the fences) J. McLellan

  11. Hypothesis Tests for Means … with known variance One-Sided Test - at the α significance level Hypotheses: H0: μ = μ0, H1: μ > μ0 Test Statistic: z0 = (x̄ - μ0)/(σ/√n) Fence: z_α Reject H0 if z0 > z_α (rejection region is the upper tail beyond the fence) J. McLellan

  12. Hypothesis Tests for Means … with known variance One-Sided Test - at the α significance level Hypotheses: H0: μ = μ0, H1: μ < μ0 Test Statistic: z0 = (x̄ - μ0)/(σ/√n) Fence: -z_α Reject H0 if z0 < -z_α (rejection region is the lower tail beyond the fence) J. McLellan

  13. Hypothesis Tests for Means When the variance is unknown, we estimate it using the sample variance. Test statistic • “standardize” using the sample standard deviation: t0 = (x̄ - μ0)/(s/√n) Reference distribution • becomes the Student’s t distribution • degrees of freedom are those of the sample variance • n - 1 J. McLellan

  14. Hypothesis Tests for Means … with unknown variance Two-Sided Test - at the α significance level Hypotheses: H0: μ = μ0, H1: μ ≠ μ0 Test Statistic: t0 = (x̄ - μ0)/(s/√n) Fences: ±t_{α/2, n-1} Reject H0 if |t0| > t_{α/2, n-1} (rejection region is the two tails beyond the fences) J. McLellan

  15. Hypothesis Tests for Means … with unknown variance One-Sided Test - at the α significance level Hypotheses: H0: μ = μ0, H1: μ > μ0 Test Statistic: t0 = (x̄ - μ0)/(s/√n) Fence: t_{α, n-1} Reject H0 if t0 > t_{α, n-1} (rejection region is the upper tail) J. McLellan

  16. Hypothesis Tests for Means … with unknown variance One-Sided Test - at the α significance level Hypotheses: H0: μ = μ0, H1: μ < μ0 Test Statistic: t0 = (x̄ - μ0)/(s/√n) Fence: -t_{α, n-1} Reject H0 if t0 < -t_{α, n-1} (rejection region is the lower tail) J. McLellan
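For the unknown-variance case on slides 13-16, the following sketch runs a one-sided, one-sample t-test. The four octane readings are made up purely for illustration, and scipy.stats.ttest_1samp is used only as a cross-check (the `alternative` argument requires a reasonably recent SciPy).

```python
# Minimal sketch of the unknown-variance case: a one-sided, one-sample t-test.
import numpy as np
from scipy.stats import t, ttest_1samp

data = np.array([91.2, 92.5, 90.8, 93.1])   # hypothetical octane measurements
mu0, alpha, n = 90.0, 0.05, len(data)

t0 = (data.mean() - mu0) / (data.std(ddof=1) / np.sqrt(n))   # test statistic
fence = t.ppf(1 - alpha, df=n - 1)                           # upper fence, n-1 df

print(f"t0 = {t0:.2f}, fence = {fence:.2f}, reject H0: {t0 > fence}")

# SciPy's built-in version of the same test (SciPy >= 1.6 for `alternative`)
print(ttest_1samp(data, popmean=mu0, alternative='greater'))
```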

  17. Hypothesis Tests for Variances • Hypotheses • e.g., H0: σ² = σ0², H1: σ² ≠ σ0² • Test Statistic • since (n - 1)s²/σ² follows a Chi-squared distribution with n - 1 degrees of freedom, then the Test Statistic is χ0² = (n - 1)s²/σ0² J. McLellan

  18. Hypothesis Tests for Variances Two-Sided Test - at the α significance level Hypotheses: H0: σ² = σ0², H1: σ² ≠ σ0² Test Statistic: χ0² = (n - 1)s²/σ0² Fences: χ²_{1-α/2, n-1} and χ²_{α/2, n-1} Reject H0 if χ0² < χ²_{1-α/2, n-1} or χ0² > χ²_{α/2, n-1} (rejection region is the two tails) J. McLellan

  19. Hypothesis Tests for Variances One-Sided Test - at the α significance level Hypotheses: H0: σ² = σ0², H1: σ² > σ0² Test Statistic: χ0² = (n - 1)s²/σ0² Fence: χ²_{α, n-1} Reject H0 if χ0² > χ²_{α, n-1} (rejection region is the upper tail) J. McLellan

  20. Hypothesis Tests for Variances One-Sided Test - at the α significance level Hypotheses: H0: σ² = σ0², H1: σ² < σ0² Test Statistic: χ0² = (n - 1)s²/σ0² Fence: χ²_{1-α, n-1} Reject H0 if χ0² < χ²_{1-α, n-1} (rejection region is the lower tail) J. McLellan
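A short sketch of the one-sided chi-squared test for a single variance described on slides 17-20; the sample variance and target variance below are hypothetical values chosen only to exercise the formulas.

```python
# One-sided chi-squared test for a single variance,
# H0: sigma^2 = sigma0^2 vs. H1: sigma^2 > sigma0^2. Inputs are illustrative.
from scipy.stats import chi2

n, s2, sigma0_sq, alpha = 20, 2.8, 1.5, 0.05   # sample size, sample variance, target, level

chi2_0 = (n - 1) * s2 / sigma0_sq              # test statistic, chi-squared with n-1 df
fence = chi2.ppf(1 - alpha, df=n - 1)          # upper-alpha fence

print(f"chi2_0 = {chi2_0:.2f}, fence = {fence:.2f}, reject H0: {chi2_0 > fence}")
```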

  21. Outline • random samples • notion of a statistic • estimating the mean - sample average • assessing the impact of variation on estimates - sampling distribution • estimating variance - sample variance and standard deviation • making decisions - comparisons of means, variances using confidence intervals, hypothesis tests • comparisons between samples J. McLellan

  22. Comparisons Between Two Samples So far, we have tested means and variances against known values • can we compare estimates of means (or variances) between two samples? • Issue - uncertainty present in both quantities, and must be considered Common Question • do both samples come from the same underlying parent population? • e.g., compare populations before and after a specific treatment J. McLellan

  23. Preparing to Compare Samples Experimental issues • ensure that data is collected in a randomized order for each sample • ensure that there are no systematic effects - e.g., catalyst deactivation, changes in ambient conditions, cooling water heating up gradually • blocking - subject experimentation to same conditions - ensure quantities other than those of interest aren’t changing J. McLellan

  24. Comparison of Variances … is typically conducted prior to comparing means • recall that the standardization required for a hypothesis test (or confidence interval) for the mean requires use of the standard deviation, so we should compare variances first before choosing the appropriate mean comparison Approach • focus on the ratio of variances • is this ratio = 1? • will be assessed using sample variances • what should we use for a reference distribution? J. McLellan

  25. Comparison of Variances Test Statistic: s1²/s2² • for use in both hypothesis tests and confidence intervals The quantity (s1²/σ1²)/(s2²/σ2²) follows an F-distribution with n1 - 1 and n2 - 1 degrees of freedom • n1 and n2 are the number of points in the samples used to compute s1² and s2², respectively J. McLellan

  26. The F Distribution … arises from the ratio of two Chi-squared random variables, each divided by its degrees of freedom • a sample variance is a sum of squared Normal random variables • dividing by the population variance standardizes them, and the expression becomes a sum of squared standard Normal r.v.’s, i.e., Chi-squared J. McLellan

  27. Confidence Interval Approach Form the probability statement for this test statistic: P( F_{1-α/2, n1-1, n2-1} ≤ (s1²/σ1²)/(s2²/σ2²) ≤ F_{α/2, n1-1, n2-1} ) = 1 - α and rearrange: (s1²/s2²)/F_{α/2, n1-1, n2-1} ≤ σ1²/σ2² ≤ (s1²/s2²)·F_{α/2, n2-1, n1-1} J. McLellan

  28. Confidence Interval Approach 100(1 - α)% Confidence Interval Approach: • compute confidence interval • determine whether “1” lies in the interval • if so - identical variances is a reasonable conjecture • if not - different variances J. McLellan

  29. Hypothesis Test Approach Typical approach • use a 1-sided test, with the test direction dictated by which sample variance is larger Test Statistic: F0 = s1²/s2² (with the larger sample variance in the numerator) Under the null hypothesis, we are assuming that σ1² = σ2² J. McLellan

  30. Hypothesis Tests for Variances One-Sided Test - at the α significance level For s1² > s2² Hypotheses: H0: σ1² = σ2², H1: σ1² > σ2² Test Statistic: F0 = s1²/s2² Fence: F_{α, n1-1, n2-1} Reject H0 if F0 > F_{α, n1-1, n2-1} J. McLellan

  31. Hypothesis Tests for Variances One-Sided Test - at the α significance level For s2² > s1² Hypotheses: H0: σ1² = σ2², H1: σ2² > σ1² Test Statistic: F0 = s2²/s1² Fence: F_{α, n2-1, n1-1} Reject H0 if F0 > F_{α, n2-1, n1-1} Why the reversal? J. McLellan

  32. Why the reversal? • Property of the F-distribution • typically, we would compare s1²/s2² against F_{1-α, n1-1, n2-1} (a lower-tail fence when s1² < s2²) • Problem - tables for tail areas of 1 - α are not always available • Solution - use the following fact for F-distributions: F_{1-α, ν1, ν2} = 1/F_{α, ν2, ν1} • to use this, reverse the test ratio - previous slide J. McLellan
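The reciprocal property quoted on this slide can be checked numerically. The degrees of freedom below are arbitrary, and because SciPy's f.ppf returns lower-tail quantiles, the slide's upper-tail-area notation is translated in the comments.

```python
# Numerical check of the F-distribution fact F_{1-alpha, nu1, nu2} = 1 / F_{alpha, nu2, nu1}
# (subscripts denote upper tail areas, as on the slides).
from scipy.stats import f

alpha, nu1, nu2 = 0.05, 30, 20

lhs = f.ppf(alpha, nu1, nu2)            # F_{1-alpha, nu1, nu2}: upper tail area 1-alpha
rhs = 1.0 / f.ppf(1 - alpha, nu2, nu1)  # 1 / F_{alpha, nu2, nu1}: reciprocal of the upper-alpha point

print(lhs, rhs)   # the two numbers agree
```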

  33. Example Global warming problem from tutorial: • s1 - standard deviation for March ‘99 is 3.2 °C • s2 - standard deviation for March ‘98 is 2.3 °C • has the variance of temperature readings increased in 1999? • first, work with variances: • 1999 -- 10.2 °C² • 1998 -- 5.3 °C² • since a) we are interested in whether the variance increased, and b) the 1999 variance (10.2) is greater than the 1998 variance (5.3), use the ratio s1²/s2² Each variance is estimated using 31 data points J. McLellan

  34. Example Hypotheses: H0: σ1² = σ2², H1: σ1² > σ2² • observed value of the ratio = 1.94 • “fence value” - test at the 5% significance level: • F_{0.05, 31-1, 31-1} = F_{0.05, 30, 30} = 1.84 • since the observed value of the test statistic exceeds the fence value, reject the null hypothesis • the variance has increased Note • if we had conducted the test at the 1% significance level (F = 2.39), we would not have rejected the null hypothesis J. McLellan
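The slide's numbers can be reproduced with a few lines of Python; SciPy is assumed and is not part of the original material.

```python
# Reproducing the slide's F-test: s1^2 = 10.2 (Mar '99), s2^2 = 5.3 (Mar '98),
# 31 observations per sample, one-sided test that the 1999 variance is larger.
from scipy.stats import f

s1_sq, s2_sq, n1, n2 = 10.2, 5.3, 31, 31

F0 = s1_sq / s2_sq                        # observed ratio, ~1.9
fence_5pct = f.ppf(0.95, n1 - 1, n2 - 1)  # F_{0.05, 30, 30}, ~1.84
fence_1pct = f.ppf(0.99, n1 - 1, n2 - 1)  # F_{0.01, 30, 30}, ~2.39

print(f"F0 = {F0:.2f}")
print(f"reject at 5%: {F0 > fence_5pct}, reject at 1%: {F0 > fence_1pct}")
```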

  35. Example Now use confidence intervals to compare the variances: • use a 95% confidence interval - the outer tail area is 2.5% on each side • this is a 2-tailed interval, so we need F_{0.025, 30, 30} ≈ 2.07 (and the lower fence F_{0.975, 30, 30} = 1/2.07) J. McLellan

  36. Example Confidence interval: 1.94/2.07 ≤ σ1²/σ2² ≤ 1.94 × 2.07, i.e., approximately (0.94, 4.0) Conclusion • since 1 is contained in this interval, we conclude that the variances are the same • why does the conclusion differ from the hypothesis test? • 2-sided confidence interval vs. 1-sided hypothesis test • in the confidence interval, 1 is close to the lower boundary J. McLellan
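The same comparison can be done as a 95% two-sided confidence interval for the variance ratio. The sketch below gives roughly (0.93, 4.0), with 1 just inside the lower end, which matches the discussion above.

```python
# 95% confidence interval for sigma_99^2 / sigma_98^2 using the F distribution.
from scipy.stats import f

s1_sq, s2_sq, n1, n2, alpha = 10.2, 5.3, 31, 31, 0.05
ratio = s1_sq / s2_sq

lower = ratio / f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)   # divide by F_{0.025, 30, 30} ~ 2.07
upper = ratio * f.ppf(1 - alpha / 2, n2 - 1, n1 - 1)   # multiply by F_{0.025, 30, 30} (same dfs here)

print(f"95% CI for the variance ratio: ({lower:.2f}, {upper:.2f})")
print("variances plausibly equal:", lower <= 1.0 <= upper)
```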

  37. Comparing Means The appropriate approach depends on: • whether variances are known • whether a test of sample variances indicates that variances can be considered to be equal • measurements coming from same population Assumption: data are Normally distributed The approach is similar, however the form depends on the conditions above • form test statistic • use reference distribution • re-arrange (confidence intervals) or compare to fence (hypothesis tests) J. McLellan

  38. Comparing Means Known Variances • if the variances are known (σ1², σ2²), then x̄1 - x̄2 ~ N(μ1 - μ2, σ1²/n1 + σ2²/n2) • now we can standardize to obtain our test statistic: z = ((x̄1 - x̄2) - (μ1 - μ2))/√(σ1²/n1 + σ2²/n2) Note - we are assuming that the samples used for the averages are independent. J. McLellan

  39. Comparing Means Known Variances Confidence Interval • form the probability statement for the test statistic as a Standard Normal random variable • re-arrange to obtain the interval (x̄1 - x̄2) ± z_{α/2}·√(σ1²/n1 + σ2²/n2) for μ1 - μ2 • procedure is analogous to that for the mean with known variance J. McLellan

  40. Comparing Means Known Variances Hypothesis Test (H0: μ1 = μ2) Test Statistic: z0 = (x̄1 - x̄2)/√(σ1²/n1 + σ2²/n2) Fences: ±z_{α/2} Reject H0 if |z0| > z_{α/2} Two-Sided Test J. McLellan
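A sketch of the two-sided, known-variance comparison of means; all of the numbers below are hypothetical inputs chosen only to exercise the formulas.

```python
# Two-sided z-test for mu1 - mu2 with known variances; inputs are illustrative.
from math import sqrt
from scipy.stats import norm

xbar1, xbar2 = 92.0, 90.5
sigma1_sq, sigma2_sq = 2.25, 2.25
n1, n2, alpha = 8, 10, 0.05

z0 = (xbar1 - xbar2) / sqrt(sigma1_sq / n1 + sigma2_sq / n2)   # test statistic under H0
fence = norm.ppf(1 - alpha / 2)                                # +/- z_{alpha/2}, ~1.96

print(f"z0 = {z0:.2f}, fences = +/-{fence:.2f}, reject H0: {abs(z0) > fence}")
```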

  41. Comparing Means Unknown Variance • the appropriate choice depends on whether the variances can be considered equal or are different • test using a comparison of variances • if the variances can be considered to be equal, assume that we are sampling from populations with the same variance • pool the variance estimates to obtain an estimate with more degrees of freedom J. McLellan

  42. Pooling Variance • If variances can reasonably be considered to be the same, then we can assume that we are sampling from population with same variance • convert sample variances back to sums of squares, add them together, and divide by the combined number of degrees of freedom • can follow similar procedure for J. McLellan

  43. Pooling Variance • We have obtained the original sum of squares from each sample variance: (n1 - 1)s1² and (n2 - 1)s2² • combine to form the overall sum of squares: (n1 - 1)s1² + (n2 - 1)s2² • degrees of freedom: (n1 - 1) + (n2 - 1) = n1 + n2 - 2 • pooled variance estimate: sp² = [(n1 - 1)s1² + (n2 - 1)s2²]/(n1 + n2 - 2) J. McLellan
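The pooling recipe on this slide is just a weighted combination of the two sums of squares; here is a small sketch with made-up sample variances.

```python
# Pooled variance estimate from two sample variances; inputs are illustrative.
s1_sq, s2_sq, n1, n2 = 2.1, 2.6, 12, 15

ss_total = (n1 - 1) * s1_sq + (n2 - 1) * s2_sq   # combined sum of squares
df_total = (n1 - 1) + (n2 - 1)                   # combined degrees of freedom
sp_sq = ss_total / df_total                      # pooled variance estimate

print(f"pooled variance = {sp_sq:.3f} with {df_total} degrees of freedom")
```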

  44. Comparing Means Unknown Variance - “Equal Variances” Confidence Intervals • recall that Var(x̄1 - x̄2) = σ²(1/n1 + 1/n2), estimated by sp²(1/n1 + 1/n2) • since the variance is estimated, we use the t-distribution as the reference distribution • interval: (x̄1 - x̄2) ± t_{α/2, n1+n2-2}·sp·√(1/n1 + 1/n2) • degrees of freedom = (n1-1) + (n2-1) • if 0 lies in this interval, the means are not significantly different J. McLellan

  45. Comparing Means Unknown Variance - “Equal Variances” Hypothesis Test (H0: μ1 = μ2) Test Statistic: t0 = (x̄1 - x̄2)/(sp·√(1/n1 + 1/n2)) Fences: ±t_{α/2, n1+n2-2} Reject H0 if |t0| > t_{α/2, n1+n2-2} J. McLellan
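A sketch of the pooled (equal-variance) two-sample t-test. The two data arrays are invented for illustration, and scipy.stats.ttest_ind with equal_var=True is shown as a cross-check, since it computes the same pooled statistic.

```python
# Equal-variance ("pooled") two-sample t-test; data are illustrative.
import numpy as np
from scipy.stats import t, ttest_ind

a = np.array([91.8, 92.4, 90.9, 92.0, 91.5])
b = np.array([90.2, 90.8, 91.1, 89.9, 90.5, 90.7])
alpha = 0.05

n1, n2 = len(a), len(b)
sp_sq = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
t0 = (a.mean() - b.mean()) / np.sqrt(sp_sq * (1 / n1 + 1 / n2))
fence = t.ppf(1 - alpha / 2, df=n1 + n2 - 2)

print(f"t0 = {t0:.2f}, fences = +/-{fence:.2f}, reject H0: {abs(t0) > fence}")
print(ttest_ind(a, b, equal_var=True))   # same statistic, with a p-value
```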

  46. Comparing Means Unknown Variance - “Unequal Variances” • the test becomes an approximation • approach • test statistic: t0 = (x̄1 - x̄2)/√(s1²/n1 + s2²/n2) • reference distribution - Student’s t distribution • estimate an “equivalent” number of degrees of freedom J. McLellan

  47. Comparing Means Unknown Variance - “Unequal Variances” • equivalent number of degrees of freedom • degrees of freedom ν is the largest integer less than or equal to (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1 - 1) + (s2²/n2)²/(n2 - 1) ] J. McLellan
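A sketch of the effective-degrees-of-freedom calculation, assuming the Welch-Satterthwaite form written above; the inputs are illustrative.

```python
# "Equivalent" (Welch-Satterthwaite) degrees of freedom when variances cannot be pooled.
import math

s1_sq, s2_sq, n1, n2 = 3.4, 1.1, 10, 14   # illustrative sample variances and sizes

v1, v2 = s1_sq / n1, s2_sq / n2
nu_exact = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
nu = math.floor(nu_exact)   # largest integer <= the computed value, as on the slide

print(f"effective degrees of freedom: {nu_exact:.2f} -> use {nu}")
```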

  48. Comparing Means Unknown Variance - “Unequal Variances” Confidence Intervals • similar to the case of known variances, but using the sample variances and the t-distribution • degrees of freedom ν is the effective number of degrees of freedom (from the previous slide) • recall that the interval is (x̄1 - x̄2) ± t_{α/2, ν}·√(s1²/n1 + s2²/n2) • if 0 isn’t contained in the interval, conclude that the means differ J. McLellan

  49. Comparing Means Unknown Variance - “Unequal Variances” Hypothesis Test (H0: μ1 = μ2) Test Statistic: t0 = (x̄1 - x̄2)/√(s1²/n1 + s2²/n2) Fences: ±t_{α/2, ν} Reject H0 if |t0| > t_{α/2, ν} J. McLellan
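For the unequal-variance test itself, SciPy's two-sample routine with equal_var=False applies the same statistic and a (non-truncated) Welch-Satterthwaite degrees of freedom; the data below are made up for illustration.

```python
# Unequal-variance (Welch) two-sample t-test; data are illustrative.
import numpy as np
from scipy.stats import ttest_ind

a = np.array([12.1, 14.3, 11.8, 15.0, 13.2, 12.7])
b = np.array([10.2, 10.9, 10.4, 10.7, 10.5])

# equal_var=False requests the Welch version rather than the pooled test.
print(ttest_ind(a, b, equal_var=False))
```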

  50. Paired Comparisons for Means Previous approach • 2 data sets obtained from 2 processes • compute average, sample variance for EACH data set • compare differences between sample averages Issue - • extraneous variation present because we have conducted one experimental program for process 1, and one distinct experimental program for process 2 • additional variation reduces sensitivity of tests • location of fences depends in part on extent of variation • can we conduct experiments in a paired manner so that they have as much variation in common as possible, and extraneous variation is eliminated? J. McLellan
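One standard way to carry out the paired comparison raised here (not spelled out on this slide) is to test the within-pair differences against zero, so that the variation the pairs share cancels out; below is a minimal sketch with invented before/after data.

```python
# Paired comparison: test the pairwise differences rather than two separate samples.
import numpy as np
from scipy.stats import ttest_rel

before = np.array([90.1, 91.4, 89.8, 90.6, 91.0])   # hypothetical paired measurements
after  = np.array([91.0, 92.1, 90.3, 91.5, 91.8])

# Equivalent to a one-sample t-test on (after - before) against 0.
print(ttest_rel(after, before))
```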
