This tutorial explains how to run a searchlight-based Multivariate Pattern Analysis (MVPA) using MVPA_Searchlight.R. The script performs a local classification or regression analysis on fMRI data by iterating over each voxel (or node for surface data) and extracting information from a surrounding neighborhood.
The script handles both volumetric (NIfTI) and surface-based neuroimaging data and leverages parallel processing across multiple cores for faster computation. You can choose from various classifiers and regressors, such as `rf`, `sda_notune`, and `corsim`. The analysis generates reproducible outputs, including configuration files and metric maps. Cross-validation options include both blocked and stratified approaches, samples can optionally be normalized by centering and scaling, and several feature selection methods are supported.
If you have:

- `train_data.nii`
- `train_design.txt`
- `mask.nii`
You can run the script from the command line:
```bash
MVPA_Searchlight.R --radius=6 \
  --train_design=train_design.txt \
  --train_data=train_data.nii \
  --mask=mask.nii \
  --model=sda_notune \
  --label_column=condition \
  --ncores=4 \
  --output=my_searchlight_output
```
The script supports two primary data modes:
- `--data_mode=image`: volumetric (NIfTI) data
- `--data_mode=surface`: surface-based data
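For example, a surface-based run looks like the volumetric call above but with `--data_mode=surface` and surface-format inputs. The file names below (`train_data_lh.gii`, `cortex_mask_lh.gii`) are hypothetical placeholders; only the flags shown are taken from this tutorial:

```bash
# Hypothetical surface-based invocation; substitute your own surface data and mask files
Rscript MVPA_Searchlight.R --radius=8 \
  --data_mode=surface \
  --train_design=train_design.txt \
  --train_data=train_data_lh.gii \
  --mask=cortex_mask_lh.gii \
  --model=sda_notune \
  --label_column=condition \
  --output=surface_searchlight_output
```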
The script supports various classification and regression models:
- `corclass`: Correlation-based classifier with template matching
- `sda_notune`: Simple Shrinkage Discriminant Analysis (SDA) without tuning
- `sda_boot`: SDA with bootstrap resampling
- `glmnet_opt`: Elastic net with EPSGO parameter optimization
- `sparse_sda`: SDA with sparsity constraints
- `sda_ranking`: SDA with automatic feature ranking
- `mgsda`: Multi-Group Sparse Discriminant Analysis
- `lda_thomaz`: Modified LDA for high-dimensional data
- `hdrda`: High-Dimensional Regularized Discriminant Analysis

Additional models can be registered with `register_mvpa_model()`.
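Switching models is a matter of changing `--model` (or the `model` field in the config file). For instance, a sketch of the earlier searchlight call using the correlation-based classifier:

```bash
# Same data and design as before, but with the corclass model
Rscript MVPA_Searchlight.R --radius=6 \
  --train_design=train_design.txt \
  --train_data=train_data.nii \
  --mask=mask.nii \
  --model=corclass \
  --label_column=condition \
  --ncores=4 \
  --output=corclass_searchlight_output
```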
The script supports multiple cross-validation strategies:
- Blocked cross-validation (`--block_column=session`): uses a blocking variable (e.g., session) for cross-validation splits.
- Stratified cross-validation: the default when no block column is specified; uses random splits.
Specify in the configuration file:
```yaml
cross_validation:
  name: "twofold"
  nreps: 10
```
In addition to the standard options above, several advanced cross-validation strategies are available:
Specify the desired method in your configuration file by setting the `name` field under `cross_validation`. For example, to use bootstrap blocked cross-validation:
```yaml
cross_validation:
  name: "bootstrap"   # Options: "twofold", "bootstrap", "sequential", "custom", "kfold"
  nreps: 10
```
Choose the method that best aligns with your data structure and experimental design.
Enable feature selection with the `--feature_selector` parameter:
```yaml
feature_selector:
  method: "anova"            # or "correlation", "t-test", etc.
  cutoff_type: "percentile"
  cutoff_value: 0.1
```
The `label_column` is critical: it specifies the target variable for classification or regression (e.g., `"Face"` vs. `"House"`).

Example design file (`train_design.txt`):
```
trial condition subject session
1     Face      S01     1
2     House     S01     1
3     Face      S01     1
4     House     S01     1
5     Face      S01     2
```
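Before launching a long searchlight run, it can help to sanity-check the design file in R. This is a minimal sketch using only base R; it assumes the whitespace-delimited layout shown above:

```r
# Read the design table (whitespace-delimited with a header row)
design <- read.table("train_design.txt", header = TRUE, stringsAsFactors = FALSE)

# The target variable passed via --label_column
table(design$condition)

# Check how conditions are distributed across the blocking variable (--block_column)
table(design$condition, design$session)
```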
Instead of specifying all options on the command line, you can use a YAML or R script configuration file.
Example YAML config file (`config.yaml`):
```yaml
# Data Sources
train_design: "train_design.txt"
test_design: "test_design.txt"
train_data: "train_data.nii"
test_data: "test_data.nii"
mask: "mask.nii"

# Analysis Parameters
model: "rf"             # Random Forest classifier
data_mode: "image"      # or "surface"
ncores: 4
radius: 6
label_column: "condition"
block_column: "session"

# Output Options
output: "searchlight_results"
normalize_samples: TRUE
class_metrics: TRUE

# Advanced Options
feature_selector:
  method: "anova"
  cutoff_type: "percentile"
  cutoff_value: 0.1

cross_validation:
  name: "twofold"
  nreps: 10

# Optional Subsetting
train_subset: "subject == 'S01'"
test_subset: "subject == 'S02'"
```
Running with a Config File:
```bash
Rscript MVPA_Searchlight.R --config=config.yaml
```
After running the script, the output directory (`searchlight_results/`) contains:
- `accuracy.nii`: Overall classification accuracy map
- `auc.nii`: Area Under Curve (AUC) performance map

For multiclass problems with `class_metrics: TRUE`:

- `auc_class1.nii`, `auc_class2.nii`, etc.: Per-class AUC maps

Probability maps (when available):

- `prob_observed.nii`: Probabilities for observed classes
- `prob_predicted.nii`: Probabilities for predicted classes
Configuration:

- `config.yaml`: Complete record of analysis parameters for reproducibility

Example directory structure:
```
searchlight_results/
├── accuracy.nii        # Overall classification accuracy
├── auc.nii             # Mean AUC across classes
├── auc_class1.nii      # AUC for class 1 (if class_metrics: TRUE)
├── auc_class2.nii      # AUC for class 2 (if class_metrics: TRUE)
├── prob_observed.nii   # Probabilities for observed classes
├── prob_predicted.nii  # Probabilities for predicted classes
└── config.yaml         # Analysis configuration
```
The exact files will depend on:
- Whether it's a binary or multiclass classification
- Whether `class_metrics: TRUE` is set
- The type of analysis (classification vs regression)
- The model type used
For regression analyses, you'll see different metrics:
- `r2.nii`: R-squared values
- `rmse.nii`: Root Mean Square Error
- `spearcor.nii`: Spearman correlation
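Once the run finishes, the metric maps are ordinary NIfTI volumes and can be inspected with any NIfTI reader. A minimal sketch, assuming the RNifti package is installed (not a dependency stated in this tutorial):

```r
library(RNifti)

# Load the overall accuracy map produced by the searchlight
acc <- readNifti("searchlight_results/accuracy.nii")

# Summarise accuracy inside the mask (voxels outside the mask are typically zero)
in_mask <- acc > 0
summary(acc[in_mask])

# For a two-class problem, the fraction of voxels above chance (0.5)
mean(acc[in_mask] > 0.5)
```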
A few practical tips:

- Use `--normalize_samples=TRUE` for better model performance.
- Increase `--ncores` for faster processing on multi-core systems.
- Choose `--radius` based on your spatial resolution and hypothesis.
- Use `--type=randomized` for faster, approximate searchlights.
- For large datasets, you may need to raise `options(future.globals.maxSize)` so that data can be exported to the parallel workers.
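If you do hit the export limit of the future framework when running with many cores, the option below (from the future package; the value is in bytes) can be raised before launching the analysis, for example in your `.Rprofile`:

```r
# Allow up to ~2 GB of globals to be exported to each parallel worker
options(future.globals.maxSize = 2 * 1024^3)
```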
MVPA_Searchlight.R provides a flexible searchlight-based MVPA tool that works with both volumetric and surface-based data. It includes cross-validation, feature selection, and extensive configuration through command line or config files. The tool generates comprehensive metrics and reproducible outputs to help you analyze your neuroimaging data.
Next Steps:
- Try different models (`--model=rf`, `--model=sda_notune`)
- Experiment with feature selection methods
- Explore surface-based MVPA with --data_mode=surface
- Use cross-validation strategies appropriate for your design
- Optimize performance with parallel processing
Happy searchlighting!