
Measuring Functional Integration: Connectivity Analyses


Presentation Transcript


  1. Measuring Functional Integration: Connectivity Analyses

  2. Roadmap to connectivity • Functional architecture of the brain? • Functional segregation: univariate analyses of regionally specific effects • Functional integration: multivariate analyses of regional interactions • Functional connectivity: "the temporal correlation between spatially remote neurophysiological events" – an operational/observational definition; many possible reasons and mechanisms! • Effective connectivity: "the influence one neuronal system exerts upon others" – a mechanistic/model-based definition; context and mechanism of specific connections?

  3. Overview • Functional Connectivity: • SVD/PCA • Eigenimage analysis • Problems of Eigenimage analysis • Possible solutions: Partial least squares, ManCova + Canonical Variate Analysis • Effective connectivity: • Basic concepts – linear vs nonlinear models • Regression-based models: PPI, SEM • Modulatory influences at neuronal vs BOLD level • Limitations of the presented methods and outlook

  4. Singular Value Decomposition • Aim: to extract the structure inherent in the covariance of a series of repeated measurements (e.g. several scans in multiple voxels) • SVD of the mean-centred data is equivalent to Principal Component Analysis (PCA) • Neuroimaging question: which spatio-temporal patterns of activity explain most of the (co)variance in a timeseries? • Procedure: decomposition of a matrix Y (n scans × m voxels) into: V (m × m), the "eigenimages" → SPACE: expression of m patterns in m voxels; U (n × n), the "eigenvariates" → TIME: expression of n patterns in n scans; S (n × m), the "singular values" → IMPACT: variance the patterns account for ("eigenvalues" when squared) • Only components with eigenvalues > 1 need to be considered! • The decomposition (and possible reconstruction) of Y: [U,S,V] = svd(Y), so that Y = U S V^T
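As a concrete illustration, here is a minimal MATLAB sketch of this decomposition; the synthetic data, the variable names and the mean-centring step are assumptions added for illustration and are not part of the original slides.

```matlab
% Minimal sketch: SVD of a (mean-centred) timeseries matrix Y (n scans x m voxels).
% Synthetic data stand in for a real PET/fMRI timeseries.
n = 128; m = 40;                        % n scans, m voxels (matching the toy example on the next slide)
Y = randn(n, m);                        % placeholder data
Y = Y - mean(Y, 1);                     % mean-centre each voxel (PCA convention)

[U, S, V] = svd(Y, 'econ');             % Y = U*S*V'
eigenvariates = U;                      % temporal expression of each pattern (n x p)
eigenimages   = V;                      % spatial patterns over the voxels   (m x p)
eigenvalues   = diag(S).^2 / (n - 1);   % variance accounted for by each pattern

varExplained  = eigenvalues / sum(eigenvalues);   % proportion of variance per component
```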

  5. Eigenimages • [Figure] A time-series of 1D images: 128 scans of 40 "voxels"; the eigenvariates show the expression of the first three eigenimages, alongside the eigenvalues and spatial modes.

  6. SVD: Data Reconstruction • Y = U S V^T = s1 U1 V1^T + s2 U2 V2^T + … (p < n) • U: "eigenvariates" – expression of p patterns in n scans • S: "singular values" or "eigenvalues" (when squared) – variance the p patterns account for • V: "eigenimages" or "spatial modes" – expression of p patterns in m voxels • Data reduction → components explain less and less variance • Only components with eigenvalue > 1 need to be included! • [Figure: Y (data) approximated term by term, s1 U1 V1^T + s2 U2 V2^T + …, with eigenvariates U over time and eigenimages V over voxels]
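Continuing the sketch above, a rank-p reconstruction of Y (with p chosen by the eigenvalue > 1 rule mentioned on the slide) might look as follows; again an illustration under the same assumptions, not a prescribed recipe.

```matlab
% Reconstruct Y from its first p components (data reduction).
p  = sum(diag(S).^2 / (n - 1) > 1);          % e.g. keep components with eigenvalue > 1
Yp = U(:, 1:p) * S(1:p, 1:p) * V(:, 1:p)';   % rank-p approximation of Y

% Equivalent term-by-term form: Y ~ s1*U1*V1' + s2*U2*V2' + ...
Yp_check = zeros(size(Y));
for k = 1:p
    Yp_check = Yp_check + S(k, k) * U(:, k) * V(:, k)';
end
```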

  7. Eigenimages • [Figure] The same time-series of 1D images (128 scans of 40 "voxels") with its eigenvariates, eigenvalues and spatial modes, and the time-series "reconstructed".

  8. An example: PET of word generation • Word generation and repetition (PET) • Data adjusted for effects of interest and smoothed (FWHM 16) • Two "modes" extracted (eigenvalue > 1); the first mode accounts for 64% of the variance • Spatial loadings (eigenimages): establish the anatomical interpretation (Broca's area, ACC, …) • Temporal loadings (eigenvariates): establish the functional interpretation (covariation with the experimental cycle)

  9. Problems of Eigenimage analysis I • Data-driven method: • Covariation of patterns with experimental conditions not always dominant → functional interpretation not always possible

  10. Partial Least Squares • Data-driven method: • Covariation of patterns with experimental conditions not always dominant → functional interpretation not always possible → Partial least squares: • Apply SVD to the cross-covariance of two matrices sharing the same temporal dimension: Mfmri – timeseries, n scans × m voxels; Mdesign – design matrix, n scans × p conditions • [U,S,V] = svd(Mfmri^T Mdesign), i.e. Mfmri^T Mdesign = U S V^T and U^T (Mfmri^T Mdesign) V = S • PLS identifies pairs of patterns (a spatial eigenimage and a design profile) that show maximum covariance • Also applicable to timeseries from different regions/hemispheres!
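A MATLAB sketch of this PLS step, using the matrix names from the slide but with synthetic placeholder data:

```matlab
% PLS sketch: SVD of the cross-product of timeseries and design matrix.
n = 128; m = 40; p = 2;                      % scans, voxels, conditions (placeholders)
Mfmri   = randn(n, m);                       % timeseries    (n scans x m voxels)
Mdesign = randn(n, p);                       % design matrix (n scans x p conditions)

[U, S, V] = svd(Mfmri' * Mdesign, 'econ');   % Mfmri'*Mdesign = U*S*V'
% Columns of U are spatial patterns (m x p); columns of V are design profiles (p x p).
% Paired columns U(:,k) and V(:,k) covary maximally, with covariance S(k,k).
brainScores  = Mfmri   * U;                  % expression of each spatial pattern per scan
designScores = Mdesign * V;                  % expression of each design profile per scan
```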

  11. Problems of Eigenimage analysis II • Data-driven method: • Covariation of patterns with experimental conditions not always dominant → functional interpretation not always possible • No statistical inference: • When is a pattern significantly expressed in relation to noise?

  12. ManCova / Canonical Variates • No statistical inference: • When is a pattern significantly expressed in relation to noise? → ManCova / Canonical Variates Analysis: • Multivariate combination of SPM and eigenimage analysis • Considers the expression of the eigenimages, Y = U S (scans × patterns), as the data • Multivariate inference about interactions of voxels (variables), not about one voxel

  13. ManCova / Canonical Variates • No statistical inference: • When is a pattern significantly expressed in relation to noise? → ManCova / Canonical Variates Analysis – procedure: • Expression of the eigenimages, Y = U S (scans × patterns), as the data • ManCova is used to test effects of interest on Y (smoothness in Y): St = SS(treatment), Sr = SS(residuals, model with treatment), So = SS(residuals, model without treatment), λ = Sr / So • CVA returns a set of canonical variates (CVs): linear combinations of eigenimages that explain most of the variance due to effects of interest in relation to error, determined to maximise St / Sr • Generalised eigenvalue solution: St c = Sr c e • c: p × c matrix of canonical variates; e: c × c matrix of their eigenvalues • CVs can be projected into voxel space by C = V c ("canonical images")
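A sketch of the CVA step in MATLAB; St and Sr stand for the treatment and residual sum-of-squares-and-products matrices from the ManCova, here replaced by arbitrary positive-definite placeholders so the snippet runs on its own:

```matlab
% CVA sketch: canonical variates maximise the ratio of St (effects) to Sr (error).
p  = 5;                                  % number of retained eigenimages (placeholder)
A  = randn(p); Sr = A * A' + p * eye(p); % placeholder residual SSP matrix  (p x p)
B  = randn(p); St = B * B';              % placeholder treatment SSP matrix (p x p)

[c, e]     = eig(St, Sr);                % generalised eigenproblem: St*c = Sr*c*e
[~, order] = sort(diag(e), 'descend');   % order canonical variates by eigenvalue
c = c(:, order);  e = e(order, order);

% Canonical images: project the canonical variates back into voxel space,
% with V the eigenimage matrix from the earlier SVD of the data:
% C = V * c;
```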

  14. Eigenimages vs Canonical Images • Eigenimage: captures the pattern that explains most overall variance • Canonical image: captures the pattern that explains most variance in relation to error

  15. Problems of Eigenimage analysis: The End! • Data-driven method: • Covariation of patterns with experimental conditions not always dominant → functional interpretation not always possible • No statistical model: • When is a pattern truly expressed in relation to noise? • Patterns need to be orthogonal • Biologically implausible (interactions among systems and nonlinearities) • "Only" functional connectivity: • What drives the pattern? Uni- or polysynaptic connections? Common input from ascending reticular systems? Cortico-thalamo-cortical connections? Or even nuisance effects?

  16. Overview • Functional Connectivity: • SVD/PCA • Eigenimage analysis • Problems of Eigenimage analysis • Possible solutions: Partial least squares, ManCova + Canonical Variate Analysis • Effective connectivity: • Basic concepts – linear vs nonlinear models • Regression-based models: PPI, SEM • Modulatory influences at neuronal vs BOLD level • Limitations of the presented methods and a possible solution

  17. Effective connectivity • The influence that one neural system exerts over another, and how this is affected by experimental manipulations • Considers the brain as a physically interconnected system • Requires an anatomical model of which regions are connected, and a mathematical model of how the different regions interact

  18. A mathematical model – but which one? Example: a linear time-invariant system • [Figure: two regions x1 and x2 with intrinsic connections a12, a21, self-connections a11, a22, and driving inputs u1 → x1 (c11) and u2 → x2 (c22)] • A – intrinsic connectivity; C – driving inputs • e.g. state of region x1: ẋ1 = a11 x1 + a21 x2 + c11 u1; in matrix form: ẋ = A x + C u • Linear behaviour – inputs cannot influence intrinsic connection strengths

  19. A mathematical model – a better one? Bilinear effects to approximate non-linear behavior • [Figure: the same two-region system with intrinsic connections a11, a12, a21, a22, driving inputs c11, c22, and a bilinear term b212 by which input u2 modulates the influence of x2 on x1] • A – intrinsic connectivity; B – induced connectivity; C – driving inputs • State of region x1: ẋ1 = a11 x1 + a21 x2 + b212 u2 x2 + c11 u1; in matrix form: ẋ = A x + B x u + C u • Bilinear term – product of two variables (regional activity and input)
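To make the distinction concrete, the following MATLAB sketch simulates the two-region system by Euler integration, once with the bilinear term switched off (the linear model of the previous slide) and once with it on; all parameter values and input timings are invented for illustration:

```matlab
% Toy simulation: linear model (B = 0) vs bilinear model (u2 modulates x2 -> x1).
dt = 0.1; T = 0:dt:100; nT = numel(T);   % integration grid (arbitrary time units)
A  = [-1   0.4;                          % intrinsic connectivity (incl. self-decay)
       0.3 -1 ];
C  = [1 0;                               % u1 drives x1, u2 drives x2
      0 1];
b  = 0.8;                                % bilinear modulation of the x2 -> x1 connection
u1 = double(mod(T, 40) < 20);            % boxcar driving input to x1
u2 = double(T > 50);                     % "context" input switched on in the second half

xLin = zeros(2, nT);  xBil = zeros(2, nT);
for t = 1:nT-1
    u = [u1(t); u2(t)];
    xLin(:, t+1) = xLin(:, t) + dt * (A * xLin(:, t) + C * u);      % dx = A*x + C*u
    Amod = A + u2(t) * [0 b; 0 0];                                  % u2 scales x2 -> x1
    xBil(:, t+1) = xBil(:, t) + dt * (Amod * xBil(:, t) + C * u);   % dx = (A + u2*B)*x + C*u
end
% Once u2 is on, x1 in xBil responds more strongly to x2; in xLin the coupling
% strength itself never changes, u2 only acts through the driving term C*u.
```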

  20. Linear regression models of connectivity: PPI study of attention to motion • Hypothesis: attentional modulation of the V1 – V5 connection • [Figure: V1 → V5 connection, modulated by attention] • 4 experimental conditions: • F – fixation point only • A – motion stimuli with attention (detect changes) • N – motion stimuli without attention • S – static display

  21. Linear regression models of connectivity: PPI study of attention to motion • Bilinear (interaction) term entered as a regressor in the design matrix • H0: betaPPI = 0 • Corresponds to a test for a difference in regression slopes between the attention and no-attention conditions
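A sketch of the PPI regression in MATLAB; the timeseries, the attention regressor and the variable names are all invented for illustration (in practice the interaction would be formed from a deconvolved V1 timeseries, as slide 24 discusses):

```matlab
% PPI sketch: does the V1 -> V5 regression slope change with attention?
n   = 360;
V1  = randn(n, 1);                       % (deconvolved) V1 timeseries, placeholder
att = double(mod(1:n, 90)' < 45);        % psychological variable: attention on/off
PPI = V1 .* (att - mean(att));           % psychophysiological interaction term

V5  = 0.5 * V1 + 0.4 * PPI + 0.1 * randn(n, 1);   % simulated V5 response

X    = [V1, att, PPI, ones(n, 1)];       % design matrix: main effects + interaction
beta = X \ V5;                           % least-squares estimates
% H0: beta(3) = 0, i.e. no attention-dependent change in the V1 -> V5 slope.
```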

  22. Linear regression models of connectivity: Structural equation modelling (SEM) • [Figure: path diagram with connections y1 → y2 (b12), y1 → y3 (b13), y3 → y2 (b32) and residuals z1, z2, z3] • Model: [y1 y2 y3] = [y1 y2 y3] B + [z1 z2 z3], with B = [0 b12 b13; 0 0 0; 0 b32 0] • y – time series; b – path coefficients; z – residuals (independent) • Minimises the difference between observed and implied covariance structure • Limits on the number of connections (only paths of interest) • No designed input – but modulatory effects can enter by including bilinear terms as in PPI
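A minimal sketch of the SEM idea for the three-region model above: compute the model-implied covariance from candidate path coefficients and minimise its discrepancy with the observed covariance. The least-squares discrepancy and fminsearch are simplifications chosen for brevity; SEM software normally uses a maximum-likelihood fit function.

```matlab
% SEM sketch for y = y*B + z, free paths b12, b13, b32 (as in the diagram above).
n = 200;
Y = randn(n, 3);  Y = Y - mean(Y, 1);    % placeholder time series for y1, y2, y3
Sobs = cov(Y);                           % observed covariance

% Implied covariance: y = z*inv(I - B)  =>  Sigma = inv(I-B)' * Psi * inv(I-B)
implied = @(b, psi) inv(eye(3) - [0 b(1) b(2); 0 0 0; 0 b(3) 0])' ...
                    * diag(psi) ...
                    * inv(eye(3) - [0 b(1) b(2); 0 0 0; 0 b(3) 0]);

% Discrepancy between observed and implied covariance (least-squares for brevity)
cost = @(p) sum(sum((Sobs - implied(p(1:3), exp(p(4:6)))).^2));
pHat = fminsearch(cost, zeros(6, 1));    % [b12 b13 b32, log residual variances]
bHat = pHat(1:3);                        % estimated path coefficients
```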

  23. Linear regression models of connectivity: Inference in SEM – comparing nested models • Different models are compared that either include or exclude a specific connection of interest • Goodness of fit is compared between the full and the reduced model via a χ² statistic • Example from the attention-to-motion study: modulatory influence of PFC on the V5 – PPC connection; H0: b35 = 0
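Continuing the SEM sketch above, the nested comparison could be expressed as a χ² difference test; with the simplified least-squares cost used there this only illustrates the logic (a proper test needs the ML fit function), and chi2cdf assumes the Statistics Toolbox is available:

```matlab
% Nested model comparison: refit with the path of interest fixed to zero and compare
% fit. With an ML fit function F, the statistic (n-1)*(F_reduced - F_full) is referred
% to a chi2 distribution with df = number of paths dropped (here 1).
costReduced = @(p) cost([p(1:2); 0; p(3:5)]);   % e.g. fix the third path (b32) to zero
pRed = fminsearch(costReduced, zeros(5, 1));

Ffull = cost(pHat);  Fred = costReduced(pRed);
stat  = (n - 1) * (Fred - Ffull);               % illustrative test statistic
pval  = 1 - chi2cdf(max(stat, 0), 1);           % chi2cdf: Statistics Toolbox
```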

  24. Modulatory interactions at BOLD versus neuronal level • The HRF acts as a low-pass filter • Especially important in high-frequency (event-related) designs • Upshot: either use blocked designs, or apply hemodynamic deconvolution of the BOLD time series – incorporated in SPM2 (Gitelman et al. 2003)
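The difference between forming the interaction at the neuronal versus the BOLD level can be illustrated in a few lines of MATLAB; the gamma-shaped HRF and all timings below are crude stand-ins (SPM's canonical HRF and proper deconvolution, as in Gitelman et al. 2003, would be used in practice):

```matlab
% Forming the interaction before vs after HRF convolution gives different regressors.
dt  = 0.1;  t = 0:dt:30;
hrf = (t.^5) .* exp(-t);  hrf = hrf(:) / sum(hrf);   % crude gamma-shaped HRF (column)

nT    = 600;
neur  = double(rand(nT, 1) > 0.9);                   % toy "neuronal" events in a region
psych = double(mod(1:nT, 200)' < 100);               % psychological context (on/off)

% Neuronal-level interaction, then convolution (what the PPI should represent):
ppiNeuronal = conv(neur .* psych, hrf);  ppiNeuronal = ppiNeuronal(1:nT);

% BOLD-level interaction (product of the already-convolved signal with the context):
bold    = conv(neur, hrf);  bold = bold(1:nT);
ppiBold = bold .* psych;

% The two regressors differ (most for rapid, event-related designs), which is why
% deconvolution of the BOLD series is recommended before forming the interaction.
```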

  25. Outlook – DCM was developed to account for various shortcomings of the presented methods • State-space model – experimentally designed effects drive the system or have modulatory effects • Should allow testing of more complex models than SEM • Incorporates a forward model of how neuronal activity translates into BOLD • More next week!
