Analysis of RT distributions with R

  1. Analysis of RT distributions with R
  Emil Ratko-Dehnert, WS 2010/2011, Session 09 – 18.01.2011

  2. Last time...
  • Recap of contents so far (Chapters 1 + 2)
  • Hierarchical inference (Townsend's system)
  • Functional forms of RVs
  • Density function (TAFKA "distribution")
  • Cumulative distribution function
  • Quantiles
  • Kolmogorov-Smirnov test

  3. II RT distributions in the field

  4. RTs in visual search

  5. Why analyze distributions?
  • Normality assumption almost always violated
  • Experimental manipulations might affect only parts of the RT distribution
  • RT distributions can be used to constrain models, e.g. of visual search (model fitting and testing)

  6. RT distributions
  • Typically unimodal and positively skewed
  • Can be characterized by e.g. the following distributions: Ex-Gauss, Ex-Wald, Gamma, Weibull

  7. Ex-Gauss distribution
  • Introduced by Burbeck and Luce (1982)
  • Is the convolution of a normal and an exponential distribution
  • Density: f(x; μ, σ, τ) = (1/τ) · exp((μ − x)/τ + σ²/(2τ²)) · Φ((x − μ)/σ − σ/τ), where Φ is the CDF of N(0, 1)
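
  As a minimal sketch (not from the slides; the function name dexgauss and the parameter names mu, sigma, tau are ours), the density above can be written directly in base R, with pnorm() supplying the CDF of N(0, 1):

      dexgauss <- function(x, mu, sigma, tau) {
        # Ex-Gauss density: convolution of N(mu, sigma^2) with Exp(rate = 1/tau)
        (1 / tau) * exp((mu - x) / tau + sigma^2 / (2 * tau^2)) *
          pnorm((x - mu) / sigma - sigma / tau)
      }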

  8. Ex-Gauss: Convolution
  • A convolution is a modified version of the two original functions
  • It is the integral of the product of the two functions after one is reversed and shifted:
    (f ∗ g)(t) = ∫ f(s) · g(t − s) ds
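
  Since convolving two densities corresponds to summing two independent random variables, Ex-Gauss samples can be drawn by adding normal and exponential draws. A sketch with made-up parameter values (in ms), checked against dexgauss() from the sketch above:

      set.seed(1)
      n   <- 10000
      rts <- rnorm(n, mean = 500, sd = 50) + rexp(n, rate = 1 / 200)
      hist(rts, breaks = 50, freq = FALSE)
      curve(dexgauss(x, mu = 500, sigma = 50, tau = 200), add = TRUE)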

  9. Ex-Gauss: Why popular?
  • Components of the Ex-Gauss might correspond to different mental processes
  • Exponential → decision; Gaussian → residual perceptual and response-generating processes
  • It is known to fit RT distributions very well (particularly for hard search tasks)
  • One can look at parameter dynamics and draw inferences about trade-offs
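
  Looking at parameter dynamics presupposes parameter estimates. A minimal maximum-likelihood sketch using optim() and the dexgauss() and rts objects from the sketches above; the moment-based starting values are a common heuristic, not from the slides:

      fit.exgauss <- function(x) {
        nll <- function(p) -sum(log(dexgauss(x, p[1], p[2], p[3])))
        # rough moment-based starting values for mu, sigma, tau
        start <- c(mean(x) - 0.8 * sd(x), 0.6 * sd(x), 0.8 * sd(x))
        optim(start, nll)$par   # returns estimates of mu, sigma, tau
      }
      fit.exgauss(rts)  # should roughly recover mu = 500, sigma = 50, tau = 200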

  10. Ex-Gauss: Further reading
  • Overview: Schwarz (2001); Van Zandt (2002); Palmer, et al. (2009)
  • Others: McGill (1963); Hohle (1965); Ratcliff (1978, 1979); Burbeck, Luce (1982); Hockley (1984); Luce (1986); Spieler, et al. (1996); McElree & Carrasco (1999); Spieler, et al. (2000); Wagenmakers, Brown (2007)

  11. Ex-Wald distribution
  • Is the convolution of an exponential and a Wald distribution
  • Represents decision and response components as a diffusion process (Schwarz, 2001)
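
  A Wald (inverse Gaussian) first-passage time can be drawn with rinvgauss() from the statmod package: with unit diffusion variance, drifting at rate ν toward a boundary a gives an inverse Gaussian with mean a/ν and shape a². A sketch with assumed parameter values (the names a, nu, lambda are ours):

      library(statmod)  # provides rinvgauss()
      set.seed(2)
      n      <- 10000
      a      <- 1.5   # boundary (assumed value)
      nu     <- 3.0   # drift rate (assumed value)
      lambda <- 5.0   # rate of the exponential component (assumed value)
      wald   <- rinvgauss(n, mean = a / nu, shape = a^2)  # first-passage times
      exwald <- wald + rexp(n, rate = lambda)             # Ex-Wald samples
      hist(exwald, breaks = 50, freq = FALSE)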

  12. Ex-Wald density
  • The closed-form density is derived in Schwarz (2001)

  13. Ex-Wald: Diffusion process
  [Figure: evidence accumulates over time in an information space; the process starts at z and drifts with rate ~ N(ν, η) (mean drift ν); reaching the upper boundary A triggers response "A", reaching the lower boundary B triggers response "B"; the distance between A and B is the boundary separation.]

  14. Ex-Wald: Qualitative behaviour
  [Figure: decision times for a lax criterion (A1) and a strict criterion (A2); a larger drift rate reaches a criterion sooner than a smaller drift rate, and the lax criterion A1 is crossed before the strict criterion A2.]

  15. Ex-Wald: Why popular?
  • Parameters can be interpreted psychologically
  • Very successful in modelling RTs for a number of cognitive and perceptual tasks
  • Neurally plausible
    – Neuronal firing behaves like a diffusion process
    – Observed via single-cell recordings

  16. Ex-Wald: Further reading
  • Theoretical papers: Schwarz (2001, 2002); Ratcliff (1978); Heathcote (2004); Palmer, et al. (2005); Wolfe, et al. (2009)
  • Cognitive and perceptual tasks: Palmer, Huk & Shadlen (2005)
  • Visual search: Reeves, Santhi & Decaro (2005); Palmer, et al. (2009)

  17. Gamma distribution
  • A series of exponential distributions
  • α = average scale of the processes
  • β = approximate number of processes
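
  The "series of exponentials" reading can be checked directly in R: summing β independent exponential stages of scale α gives a Gamma with shape β and scale α. A sketch with assumed values (mapping the slide's α and β onto R's scale and shape arguments is our reading):

      set.seed(3)
      n <- 10000
      b <- 3     # number of serial stages (beta on the slide)
      a <- 100   # scale of each stage in ms (alpha on the slide)
      stages <- matrix(rexp(n * b, rate = 1 / a), nrow = n)
      rts    <- rowSums(stages)                 # sum of b exponential stages
      hist(rts, breaks = 50, freq = FALSE)
      curve(dgamma(x, shape = b, scale = a), add = TRUE)  # Gamma(b, a) overlay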

  18. Gamma: Why popular?
  • In fact, not too popular (publication-wise)
  • It fits very decently when one assumes a model that sees RT distributions as composed of three exponentially distributed processes (initial feed-forward stage → search → response selection)

  19. Gamma: Further reading
  • Dolan, van der Maas, & Molenaar (2002): A framework for ML estimation of parameters of (mixtures of) common reaction time distributions given optional truncation or censoring. Behavior Research Methods, Instruments, & Computers, 34(3), 304-323

  20. Weibull distribution
  • For a series of races (bounded by 0 and ∞), the Weibull distribution gives an asymptotic description of their minima
  • Johnson's (1994) version has 3 parameters: α, γ, ξ
  • For γ = 1 → exponential distribution; for γ ≈ 3.6 → approximately normal distribution
  • Hence γ must lie somewhere in between
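
  The γ = 1 special case is easy to verify with base R's dweibull(); the scale value below is arbitrary, and Johnson's location parameter ξ would simply shift x:

      x <- seq(0.01, 5, by = 0.01)
      s <- 1.2  # scale parameter alpha (arbitrary value)
      # gamma = 1: the Weibull reduces to the exponential distribution
      all.equal(dweibull(x, shape = 1, scale = s), dexp(x, rate = 1 / s))  # TRUE
      # gamma ~ 3.6: the density becomes nearly symmetric (approximately normal)
      curve(dweibull(x, shape = 3.6, scale = s), from = 0, to = 3)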

  21. Weibull: Why popular?
  • Has been used in a variety of cognitive tasks
  • Excels in those that can be modeled as a race among competing units (e.g. memory search RTs)
  • Has decent functional fits

  22. Weibull: Further reading
  • Logan (1992); Johnson, et al. (1994); Dolan, et al. (2002); Chechile (2003); Rouder, Lu, Speckman, Sun & Jiang (2005); Cousineau, Goodman & Shiffrin (2002); Palmer, Horowitz, Torralba, Wolfe (2009)

  23. Comparing functional fits
  • The null hypothesis is a fit of the data with the normal distribution (the standard assumption for mean/variance analysis)
  • All proposed distributions beat the Gaussian, but not equally well: 1) Ex-Gauss, 2) Ex-Wald, 3) Gamma, 4) Weibull
  • The first three also show similar parameter trends
  • For further reading, see the simulation study by Palmer, Horowitz, Torralba, Wolfe (2009)
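
  A minimal sketch of such a comparison for the distributions that MASS::fitdistr() supports out of the box (normal, gamma, Weibull); the Ex-Gauss and Ex-Wald would need custom likelihoods along the lines of fit.exgauss() above, and the simulated rts vector stands in for real data:

      library(MASS)  # provides fitdistr()
      set.seed(5)
      rts  <- rnorm(5000, 0.5, 0.05) + rexp(5000, rate = 1 / 0.2)  # stand-in RTs in s
      fits <- list(normal  = fitdistr(rts, "normal"),
                   gamma   = fitdistr(rts, "gamma"),
                   weibull = fitdistr(rts, "weibull"))
      # higher log-likelihood (lower AIC) = better fit
      sapply(fits, function(f) c(logLik = f$loglik,
                                 AIC = 2 * length(f$estimate) - 2 * f$loglik))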

  24. Excursion: Bootstrapping

  25. Basic idea
  • In statistics, bootstrapping is a method to assign accuracy to sample estimates
  • This is done by resampling with replacement from the original dataset
  • From the resamples one can estimate properties of an estimator (such as its variance)
  • It assumes IID data

  26. Example: Bootstrapping the sample mean
  • Original data: X = x1, x2, x3, ..., x10
  • Sample mean: x̄ = 1/10 · (x1 + x2 + x3 + ... + x10)
  • Resample the data (with replacement) to obtain a bootstrap sample, e.g. X1* = x2, x5, x10, x10, x2, x8, x3, x10, x6, x7, with mean μ1*
  • Repeat this 100 times to get μ1*, ..., μ100*
  • Now one has an empirical bootstrap distribution of the mean
  • From this one can derive e.g. a bootstrap CI for μ
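
  The scheme from this slide in base R (100 replicates as on the slide; the simulated data and the 95% level are our choices):

      set.seed(4)
      x <- rnorm(10, mean = 500, sd = 50)    # original data x1, ..., x10 (simulated)
      boot.means <- replicate(100, mean(sample(x, replace = TRUE)))
      hist(boot.means)                       # empirical bootstrap distribution
      quantile(boot.means, c(0.025, 0.975))  # percentile bootstrap 95% CI for the mean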

  27. Pro bootstrapping
  It is ...
  • simple and easy to implement
  • straightforward for deriving SEs and CIs for complex estimators of complex parameters of the distribution (percentile points, odds ratios, correlation coefficients)
  • an appropriate way to control and check the stability of the results

  28. Contra bootstrapping
  • It is only asymptotically consistent (and only under some conditions), so it does not provide general finite-sample guarantees
  • It has a tendency to be overly optimistic (under-estimates the real error)
  • Application is not always possible because of the IID restriction

  29. Situations to use bootstrapping
  • When the theoretical distribution of a statistic is complicated or unknown
  • When the sample size is insufficient for straightforward statistical inference
  • When power calculations have to be performed and only a small pilot sample is available
  • How many samples should be computed? As many as your hardware allows for...

  30. And now to R

  31. Creating own functions

      # Definition of new.fun: arg1, arg2, arg3 are the "inputs"
      new.fun <- function(arg1, arg2, arg3) {
        # algorithm of the function
        x <- exp(arg1)
        y <- sin(arg2)
        z <- mean(c(arg2, arg3))  # c() needed: mean(arg2, arg3) would treat arg3 as the trim argument
        result <- x + y + z
        result                    # the last expression is the "output" (return value)
      }

      # Usage of new.fun
      A <- new.fun(12, 0.4, -4)
