
Statistics



Presentation Transcript


  1. Statistics • Confidence intervals • Hypothesis testing • Conditional probability, independence, covariance • Correlation Coefficient and Linear Regression

  2. Point estimation • Last week: point estimates • What you really care about is the probability distribution that underlies your data. • But all you can do is sample a finite amount of data from the distribution. • How do you estimate a parameter (e.g. mean, variance) of the underlying distribution based on your sampled data? [figure: sampled data for n = 20, n = 100, n = 1000]

  3. Point estimates • 20 data points (y1, y2, … y20). • Mean: ȳ = (1/20) Σ yi • Variance: s² = (1/19) Σ (yi − ȳ)²
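These point estimates can be computed directly. A minimal sketch in Python (not part of the original slides; the 20 sample values are illustrative, not the slide's data):

```python
import numpy as np

# 20 hypothetical data points (illustrative values only)
y = np.array([1.2, 1.5, 1.4, 1.6, 1.3, 1.7, 1.5, 1.4, 1.6, 1.5,
              1.3, 1.4, 1.6, 1.5, 1.2, 1.7, 1.5, 1.4, 1.6, 1.5])

y_bar = y.mean()      # point estimate of the mean: (1/n) * sum(y_i)
s2 = y.var(ddof=1)    # unbiased variance estimate: (1/(n-1)) * sum((y_i - y_bar)**2)
```

`ddof=1` gives the n−1 denominator from the slide's variance formula rather than numpy's default n.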

  4. How good is your point estimate? • Confidence intervals! • An interval that overlaps with the true parameter with a specified probability.

  5. Confidence Intervals • Goal: You have taken a number of measurements of a parameter. Specify a range that with 95% probability contains the true value of the parameter. • Example: You want to know the average length of the mouse sciatic nerve (μ), based on your data (x1, x2, … x25). • n = 25 animals • Point estimate for μ: x̄ = (1/25) Σ xi • How do you calculate the 95% confidence interval for μ?

  6. Background: the standard normal distribution (Z) • The standard normal distribution has mean 0 and standard deviation 1. • P(−1.96 < Z < 1.96) = 0.95 • i.e., 95% of the time, a sample from the Z distribution is between −1.96 and +1.96 [figure: standard normal density with cutoffs at z = −1.96 and z = 1.96]

  7. Calculating a confidence interval • You measured the sciatic nerves of 25 mice: x1, x2, … x25 • Your point estimate of the sciatic nerve length is x̄. • Standardizing the sample mean maps its distribution onto the Z distribution: Z = (x̄ − μ)/(σ/√25) [figure: distribution of x̄ vs. the Z distribution]

  8. Calculating a confidence interval • P(−1.96 < (x̄ − μ)/(σ/√n) < 1.96) = 0.95 • Solve for μ: x̄ − 1.96·σ/√n < μ < x̄ + 1.96·σ/√n

  9. Calculating a confidence interval [figure: 95% CI with lower bound 1.1, point estimate 1.5, upper bound 1.9 (cm)]
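A sketch of this calculation in Python (not from the original slides). The population standard deviation σ = 1.02 is a hypothetical value chosen so the resulting interval roughly matches the slide's [1.1, 1.9]:

```python
import numpy as np
from scipy import stats

n = 25
x_bar = 1.5    # point estimate of mean nerve length (cm), from the slide
sigma = 1.02   # assumed known population SD (hypothetical value)

z_crit = stats.norm.ppf(0.975)            # ~1.96 for a 95% CI
half_width = z_crit * sigma / np.sqrt(n)
lower, upper = x_bar - half_width, x_bar + half_width
```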

  10. How to interpret a confidence interval (CI) • The center of our 95% CI is the point estimate of the mean (1.5 cm) • The probability that this CI covers the true μ is 95% [figure: interval from 1.0 to 2.0, centered at 1.5]

  11. CIs are related to the standard error of the mean (SEM) • SEM = σ/√n • For a 95% CI: x̄ ± 1.96·SEM • For a 68% CI: x̄ ± 1·SEM [figure: Z distribution with cutoffs at ±1.96 and ±1]

  12. CIs are related to the standard error of the mean (SEM) [figure: 68% CIs for nerve length (cm) in mouse vs. shrew]

  13. 100(1−α)% confidence interval • For a 100(1−α)% CI: x̄ ± z(α/2)·σ/√n [figure: Z density with cutoffs at −z(α/2) and z(α/2)]

  14. So far, we’ve talked about using the Z distribution to calculate confidence intervals • For a 95% CI of the mean, the CI (based on the Z distribution) is x̄ ± 1.96·σ/√n • Notice that this relies on knowing the variance of the underlying distribution. • We can use the estimated variance if the sample size is large enough (>30). • Otherwise, use the T distribution rather than Z.

  15. T distributions for confidence intervals • We can use the T distribution rather than the Z distribution to calculate confidence intervals based on our estimate of σ. • For large sample sizes, the T and Z distributions are identical. • This is because for large sample sizes, our estimate of the variance is very close to the true variance.

  16. Z vs T distribution • T depends on degrees of freedom (ν) = sample size − 1 [figure: Z density alone]

  17. Z vs T distribution • T depends on degrees of freedom (ν) = sample size − 1 [figure: Z vs. T with ν = 3]

  18. Z vs T distribution • T depends on degrees of freedom (ν) = sample size − 1 [figure: Z vs. T with ν = 5]

  19. Z vs T distribution • T depends on degrees of freedom (ν) = sample size − 1 [figure: Z vs. T with ν = 24]

  20. Calculating confidence intervals when you know σ [figure: Z distribution probability density over z, with cutoffs at −z(α/2) and z(α/2)]

  21. Calculating confidence intervals when you don’t know σ [figure: T distribution (ν = 30) probability density over t, with cutoffs at −t(α/2,30) and t(α/2,30)]
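A sketch of the T-based interval in Python (not from the original slides; the sample here is simulated data with hypothetical parameters, since only σ is estimated from it):

```python
import numpy as np
from scipy import stats

# Hypothetical sample of n = 25 measurements; sigma is unknown and estimated
rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=0.4, size=25)

n = len(x)
x_bar = x.mean()
s = x.std(ddof=1)                        # estimated standard deviation
t_crit = stats.t.ppf(0.975, df=n - 1)    # t replaces z when sigma is estimated
ci = (x_bar - t_crit * s / np.sqrt(n),
      x_bar + t_crit * s / np.sqrt(n))
```

Note the t critical value is wider than the 1.96 used when σ is known, reflecting the extra uncertainty from estimating σ.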

  22. Introduction to Hypothesis Testing • You wonder if a coin is fair. • You could calculate a point estimate for the probability of heads, and calculate a confidence interval on that probability. • If the confidence interval does not contain 0.5, you may be suspicious that the coin isn’t fair. • But how do you know if you can reject the hypothesis that the coin is fair?

  23. Introduction to Hypothesis Testing • The null hypothesis is that the coin is fair (p = .5). • Under the null, the number of heads follows a binomial distribution with n = 12 and p = .5. • The alternative hypothesis is that the coin is not fair (p ≠ .5). • How many heads would we need to observe in 12 trials to reject the null hypothesis? • The convention is to select a cutoff such that the probability of incorrectly rejecting the null hypothesis is less than .05 (α = .05).

  24. Introduction to Hypothesis Testing • Probability of k “heads” out of 12 flips, if the coin is fair (p = .5): the BINOMIAL DISTRIBUTION [figure: binomial pmf over k, with the rejection region for α = 0.05 shaded in the tails]
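The shaded rejection region can be found numerically. A sketch in Python (not from the original slides):

```python
from scipy import stats

n = 12
dist = stats.binom(n, 0.5)   # distribution of heads under the null (fair coin)
alpha = 0.05

# Largest symmetric two-sided region {k <= k_lo or k >= n - k_lo} whose
# total probability under the null stays at or below alpha
k_lo = max(k for k in range(n // 2)
           if dist.cdf(k) + dist.sf(n - k - 1) <= alpha)
```

With n = 12 this gives a rejection region of k ≤ 2 or k ≥ 10 heads: observing 0–2 or 10–12 heads lets you reject the fair-coin null at α = .05.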

  25. Hypothesis Testing: The Steps 1. State the hypotheses: null & alternative 2. Identify the test statistic under the null hypothesis 3. Determine the rejection region of the test statistic for the selected significance level α 4. Calculate the value of the test statistic for the data set 5. Determine whether or not the test statistic falls within the rejection region

  26. Example • You want to know if μ, the average change in the size of the dendritic tree after some manipulation, is different from 0. • i.e. Does your manipulation change the average size of the dendritic tree? • Sample size: n = 25 • Measured average: x̄ = 0.5 (the value implied by the slides’ test statistic z = 6.25) • Standard deviation: σ = 0.4

  27. 1. State the hypotheses • Null Hypothesis -- H0: μ = 0 • Alternative Hypothesis -- HA: μ ≠ 0 • The null hypothesis is always the more conservative (initially favored) claim.

  28. 2. Identify the test statistic under the null hypothesis • The test statistic is a function of the sample data on which the decision (whether or not to reject null) is based. • The test statistic you use depends on the data set and your assumptions. • For example, if we use the Z statistic, we’re assuming a normal population with a known standard deviation.

  29. 3. Determine the rejection region of the test statistic for the selected significance level α • α is the probability of incorrectly rejecting the null hypothesis • Rejection region: z > z(α/2) or z < −z(α/2) • If α = .05, z(α/2) = 1.96 [figure: probability density of z assuming the null hypothesis is true, with area α/2 in each tail and 1 − α in the middle]

  30. 4. Calculate the value of the test statistic for the data set • z = (x̄ − 0)/(σ/√n) = 0.5/(0.4/√25) = 6.25

  31. 5. Determine whether or not the test statistic falls within the rejection region • z = 6.25 • Rejection region: z > z(α/2) or z < −z(α/2); if α = .05, z(α/2) = 1.96 • Therefore, the null hypothesis (μ = 0) can be rejected at the α = .05 significance level. [figure: probability density of z under the null, with z = 6.25 far inside the rejection region]
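Steps 4 and 5 can be sketched in Python (not from the original slides; x̄ = 0.5 is the value implied by the slides' z = 6.25 with σ = 0.4 and n = 25):

```python
import numpy as np
from scipy import stats

n = 25
x_bar = 0.5   # measured average change (implied by the slides' z = 6.25)
sigma = 0.4   # known standard deviation
mu0 = 0.0     # null-hypothesis mean

z = (x_bar - mu0) / (sigma / np.sqrt(n))   # test statistic
z_crit = stats.norm.ppf(0.975)             # ~1.96 for alpha = .05
reject_null = abs(z) > z_crit
```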

  32. What’s a “P-value”? • The P-value is the probability, assuming the null hypothesis is true, of obtaining a test statistic at least as extreme as the one observed. • Equivalently, it is the smallest level of significance (α) at which the null hypothesis would be rejected. • Rejecting the null when it is in fact true is a Type I error; α is the bound you place on its probability.
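For the example above, the two-tailed P-value follows directly from the observed statistic. A sketch in Python (not from the original slides):

```python
from scipy import stats

z = 6.25                        # observed test statistic from the slides
p_value = 2 * stats.norm.sf(z)  # two-tailed: P(|Z| >= 6.25) under the null
```

The result is far below .05, consistent with rejecting the null at α = .05.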

  33. Rejection regions for One-tailed vs Two-tailed tests • One-tailed: Null -- H0: μ = 0; Alternative -- HA: μ > 0 • Two-tailed: Null -- H0: μ = 0; Alternative -- HA: μ ≠ 0

  34. Hypothesis testing: what can/can’t you conclude? • ALL you can conclude is whether or not, at a certain significance level (α), you can reject the null hypothesis. • The significance level (α) represents the greatest probability of incorrectly rejecting the null hypothesis that you will tolerate. • Common misconceptions about hypothesis testing: • It is possible to accept the null hypothesis. • There is a 5% chance that the null hypothesis is true when α = .05.

  35. You can NEVER accept the null hypothesis • The question you are asking with a hypothesis test is whether or not you can reject the null hypothesis. • You cannot prove that a coin is fair • But you can reject the hypothesis that the coin is fair (at a certain significance level) • The more data you have, the more likely you are to be able to reject the null hypothesis (if it is false).

  36. Common test statistics

  37. Parametric vs Non-parametric Tests • T-test and Z-test assume that the estimated mean is normally distributed. • Otherwise, resort to nonparametric tests, e.g. the Mann-Whitney U test or the Wilcoxon signed-rank test. • These tests have less POWER, i.e. a greater probability of failing to reject a false null hypothesis (Type II error).

  38. What if you have multiple hypotheses about your data? For example: 1) did the dendrite change in size? 2) did the axon change in size? 3) are there more presynaptic terminals? 4) are there more postsynaptic terminals?

  39. What if you just tested each hypothesis, one by one? • Set α = .05 for each hypothesis. • If you have 4 tests, the effective α becomes about 4 times bigger. • You actually have an 18% chance of incorrectly rejecting at least one of the null hypotheses when they’re all actually true! • This can make α values in the literature hard to interpret …

  40. Bonferroni correction for multiple hypothesis testing • Extremely simple, but overly conservative, correction. • Divide the desired α level for the whole test by the number of tests to get the corrected level of α for each test: α_bonf = α/n
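The 18% figure and the effect of the Bonferroni correction are easy to verify. A sketch in Python (not from the original slides), assuming the four tests are independent:

```python
alpha = 0.05
m = 4   # number of tests

# Probability of at least one false rejection when all nulls are true
family_wise = 1 - (1 - alpha) ** m                   # ~0.185, the "18% chance"

# Bonferroni: run each test at alpha / m instead
alpha_bonf = alpha / m
family_wise_corrected = 1 - (1 - alpha_bonf) ** m    # back below 0.05
```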

  41. a b c Medication A Medication B Medication C ANOVA • Tests for effect of 1 or more categories (or “factors”) on the mean of the data. • Each category may include 2 or more conditions (or “groups”). • 1-factor ANOVA tests whether all group means are the same. • Null hypothesis: a=b=c • Alternative: At least one pair is different.

  42. How is it done? And why call it ANOVA? • In ANOVA, the total variance in the data is partitioned into two components: • across-groups: variance of the data across groups • within-groups: variance of the data within groups • If the across group variance is sufficiently larger than the within group variance, you reject the null hypothesis that all means are equal. • Relies on the F-test, which tests for differences in the ratio of variances.
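A 1-factor ANOVA of the medication example can be sketched in Python (not from the original slides; the measurements are illustrative values):

```python
from scipy import stats

# Hypothetical outcome measurements under three medications (illustrative values)
med_a = [5.1, 4.9, 5.3, 5.0, 5.2]
med_b = [5.0, 5.1, 4.8, 5.2, 4.9]
med_c = [6.0, 6.2, 5.9, 6.1, 6.3]

# F is the ratio of across-group variance to within-group variance;
# a large F (small p) rejects the null that all group means are equal
f_stat, p_value = stats.f_oneway(med_a, med_b, med_c)
```

Here group C's mean is well above the within-group spread, so the F statistic is large and the null is rejected.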

  43. 2-factor ANOVA • For 2-factor ANOVA, there are 2 different factors that you are interested in. • For example: How does rodent lifespan depend on exercise and medication? • Does exercise affect lifespan? • Does medication affect lifespan? • Is there an interaction between medication and exercise?

  44. What do we mean by “interaction”? • Interaction means that you cannot sum the influence of exercise and medication to predict lifespan. • i.e., Medication affects lifespan differently, depending on the exercise level. [figure: lifespan vs. medication (A, B, C) for exercise and no-exercise groups; parallel curves show NO INTERACTION, non-parallel curves show INTERACTION]

  45. What ANOVA does/doesn’t do • Tells us whether we can reject the null hypothesis that all group means are equal • …but even if we reject the null, we still don’t know which pairs of means are different from one another. • Usually, people follow up with pairwise hypothesis tests (corrected for multiple comparisons) on the factor that ANOVA identifies as “significant”.

  46. break

  47. Joint Probability Distributions • So far, we have discussed the probability density function f(x) of a SINGLE random variable, X. • What if you have more than one variable? [figure: probability density f(x) plotted against x]

  48. Joint Probability Distributions • What if you have 2 variables, X and Y? [figure: joint density f(x,y) plotted over the (x, y) plane]

  49. Conditional probabilities • What’s the probability distribution of X, given you already know the value of Y? P(X=x|Y=y) • P(draw a spade) = 1/4 • P(draw a spade | black card) = 1/2
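The card example works out exactly from the definition P(A | B) = P(A and B) / P(B). A sketch in Python (not from the original slides):

```python
from fractions import Fraction

# Standard 52-card deck: 13 spades (all black), 26 black cards in total
p_spade = Fraction(13, 52)            # P(spade) = 1/4
p_black = Fraction(26, 52)            # P(black) = 1/2
p_spade_and_black = Fraction(13, 52)  # every spade is black

# Conditional probability: P(spade | black) = P(spade and black) / P(black)
p_spade_given_black = p_spade_and_black / p_black
```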

  50. Conditional Probability • Definition: P(X=x|Y=y) = P(X=x, Y=y) / P(Y=y)
