BrainVoyager Discussion Forum
  Statistics
  Understanding cluster thresholding

rob
Junior Member
posted 03 February 2012 04:45
I read the documentation of the ClusterThresh plugin, but have some questions.

It seems that for the same mask, a given uncorrected (voxelwise) p, and a given smoothness, the cluster-size threshold needed to reach a certain alpha (.05) should be fixed.

However, it looks like the plugin gives different results for the same mask, the same uncorrected p, and the same alpha. I would guess that the reason is that the smoothness of the input map is different.

So is it the case that if you enter a contrast A - B and it has many large clusters, the estimated smoothness of the map will be high, and hence the cluster-size correction will be relatively large? And that with the same parameters, if a contrast A - C does not have much activation (a few small clusters), the smoothness will be low and hence the correction will be low?

Is this correct, and if not, how could different contrasts produce different correction levels? Thanks.
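
For reference, my mental model of what the correction does is something like the Monte Carlo sketch below (my own assumption about the general approach, not the ClusterThresh plugin's actual code; the volume shape, FWHM, thresholds, and iteration count are made up for illustration):

```python
# A minimal Monte Carlo sketch of cluster-size thresholding as I understand it
# (an assumption, not the ClusterThresh plugin's actual code): simulate smooth
# Gaussian noise, threshold it at the uncorrected voxelwise p, and find the
# cluster size whose chance occurrence stays below alpha. All numbers below
# (volume shape, FWHM, thresholds, iterations) are arbitrary for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter, label
from scipy.stats import norm

shape    = (64, 64, 30)   # assumed volume/mask dimensions in voxels
fwhm_vox = 2.5            # assumed smoothness of the noise (FWHM, in voxels)
p_voxel  = 0.001          # uncorrected voxelwise threshold
alpha    = 0.05           # desired map-wise (cluster-level) alpha
n_iter   = 1000           # Monte Carlo iterations

sigma    = fwhm_vox / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> Gaussian sigma
z_thresh = norm.isf(p_voxel)                        # one-sided z threshold

rng = np.random.default_rng(0)
max_cluster = np.empty(n_iter, dtype=int)
for i in range(n_iter):
    noise = gaussian_filter(rng.standard_normal(shape), sigma)
    noise /= noise.std()                       # re-standardise after smoothing
    clusters, n = label(noise > z_thresh)      # connected supra-threshold voxels
    max_cluster[i] = np.bincount(clusters.ravel())[1:].max() if n else 0

# Smallest cluster size k with P(max cluster >= k | pure noise) <= alpha
k = int(np.percentile(max_cluster, 100 * (1 - alpha))) + 1
print("cluster-size threshold:", k, "voxels")
```

If that is roughly right, then for fixed mask, p and alpha the only free parameter is the smoothness, which is why I suspect it is the source of the differences.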

Fabri
Moderator
posted 03 February 2012 08:56
Your understanding is fully correct.
If different contrasts produce different smoothness estimates, and you keep these, you will get different cluster-size thresholds, even if the GLM and the mask are the same.
Your example is a good one. The moment you take the difference between two conditions A-B, you are in a sense "high-pass" filtering in the spatial domain, and this produces a much lower FWHM.
That said, you could also assume a fixed FWHM for all your contrasts, based on, e.g., the smoothing of the initial data or the GLM residual maps (both independent of the specific contrast).
One additional remark: if you apply a mask, it is applied "after" the smoothness estimation, and in any case the mask is applied to the final map, where the out-of-mask values are set to zero. Therefore, if you run the plugin twice in a row with identical settings and provide a mask, please make sure to reload the original unmasked map or recompute the GLM contrast. Of course, if you applied the mask "pre hoc", i.e. when estimating the GLM, there will be no differences.
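
To make the point about the smoothness estimate concrete, below is a rough sketch of a derivative-based FWHM estimate (an illustrative textbook-style formula with made-up dimensions and amplitudes, not the plugin's internal estimator). A map dominated by broad, real clusters varies slowly in space, so after standardisation its local derivatives are small and the estimated FWHM comes out larger than for a map that is mostly noise:

```python
# Rough per-axis FWHM estimate from the spatial-derivative variance of a
# standardised field (an assumed illustrative formula, not BrainVoyager's).
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_fwhm(vol):
    """Per-axis FWHM (in voxels) of a 3-D map."""
    z = (vol - vol.mean()) / vol.std()          # standardise the field
    return np.array([np.sqrt(4 * np.log(2) / np.diff(z, axis=a).var())
                     for a in range(3)])

rng = np.random.default_rng(0)
noise = gaussian_filter(rng.standard_normal((64, 64, 30)), sigma=1.5)
print(estimate_fwhm(noise))        # roughly the applied smoothing (~3.5 voxels)

blob = np.zeros_like(noise)        # add a broad, smooth "real activation"
blob[20:45, 20:45, 10:20] = 3.0
active = noise + gaussian_filter(blob, sigma=3.0)
print(estimate_fwhm(active))       # clearly larger estimated FWHM
```

This is exactly why a strong A-B contrast map and a weak A-C contrast map can end up with different estimated smoothness, and hence different cluster-size thresholds.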

rob
Junior Member
posted 04 February 2012 00:20
Thanks very much for the reply. I have a bit of trouble seeing why this approach is valid.

It seems that the amount of correction (the probability of getting a cluster of a certain size by chance) should not depend on which conditions you are comparing within a study.

Let's say you are comparing looking at pictures with looking at a blank screen. This will produce huge activations across the brain. Because of the large clusters, many nearby voxels will have similar values, which leads to high estimates of smoothness and hence to a high correction. The problem is that the activation you are seeing is 'real' activation, i.e., due to differences between conditions; it is not inherent smoothness (due to the scanner) or imposed smoothness (due to smoothing with a Gaussian filter of some FWHM), except for a small part. So the chance of getting a cluster of some size, if the data were pure noise, is not higher for this map just because your conditions are very different.

If you were comparing pictures of houses with pictures of tools, the activation would be much smaller. However, the correction should be the same as for the other map, because the chance of getting a cluster by chance, if the data were noise with a certain smoothness (due to the scanner and the filter), is the same. The smoothness applied to the noise used to simulate random data should not be lower just because your conditions are very similar.

This is the reason why, I believe, both SPM and AFNI use _residuals_ to estimate the smoothness of the noise used to find the correction. For a given subject, it is the same regardless of what you are comparing. The residuals left after all the variance due to conditions, motion, etc. is accounted for are assumed to be noise. The smoothness of this noise comes from the scanner plus any applied smoothing, and it determines the size of the clusters you could get by chance (smoother noise = larger clusters by chance).

In other words, what we want to find is this: if you scanned the same subject on the same scanner, but all you got was noise because your stimuli had no effect, what sizes of clusters would that noise produce? This is the correction you want to apply to all your contrasts for that subject. It does not change with the comparison.
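
In pseudo-code, I would expect something along the lines of the sketch below (my hedged reconstruction of the residual-based route, with made-up array shapes and a simple derivative-variance FWHM formula; I do not know SPM's or AFNI's actual implementation details): regress the design out of every voxel's time course and estimate the smoothness from the residual volumes only, so that the estimate no longer depends on the contrast.

```python
# A hedged sketch of the residual-based route (conceptual, not SPM's or
# AFNI's actual code): estimate smoothness from the GLM residuals, which do
# not depend on which contrast is tested afterwards. Array shapes are made up.
import numpy as np

def fwhm_from_volume(vol):
    """Per-axis FWHM (in voxels) via a simple derivative-variance formula."""
    z = (vol - vol.mean()) / vol.std()
    return np.array([np.sqrt(4 * np.log(2) / np.diff(z, axis=a).var())
                     for a in range(3)])

def residual_fwhm(data4d, design):
    """data4d: (x, y, z, t) time series; design: (t, p) design matrix."""
    x, y, z, t = data4d.shape
    Y = data4d.reshape(-1, t).T                    # (t, n_voxels)
    beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
    resid = Y - design @ beta                      # what is left after the model
    # One smoothness estimate per residual volume, averaged over time
    return np.mean([fwhm_from_volume(resid[i].reshape(x, y, z))
                    for i in range(t)], axis=0)

# This single FWHM (plus the mask) would then feed the Monte Carlo simulation,
# giving one cluster-size threshold for every contrast of that subject.
```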

Does this make sense? (I am pretty sure this is what other software packages do.) Thanks again.
