First-Level Analysis Using afni_proc.py
Step 1: Download data from XNAT and automatically convert to BIDS format using xnat2bids
Running xnat2bids on the command line in Oscar will download the data we need, convert it to BIDS format, and run the BIDS validator to check for any issues. We will be using the xnat-tools Oscar utility script explained here. The Demodat2 dataset has 3 subjects with 2 sessions each. We will be exporting this entire project.
Create a configuration file
The configuration .toml file contains the information that xnat-tools needs to download the correct data and put it where we want. Name this file x2b_demodat2_config.toml and place it wherever you'd like. Paste the following into your .toml file, and change mail-user to your email address. The script defaults to placing the downloaded and BIDS-converted data in a folder called "bids-export" in your home directory; to change this location, set bids_root to your desired path (as in the example below), e.g.: bids_root="/oscar/home/<yourusername>/data/<projectname>/xnat-export". Make sure to save the .toml file when you are done editing.
# Configuring arguments here will override default parameters.
[slurm-args]
mail-user = "[email protected]"
mail-type = "ALL"
output = "/oscar/home/example-user/logs/%x-%J.txt"
[xnat2bids-args]
bids_root = "/oscar/home/<example-username>/data/Demodat2/xnat-exports"
project="BNC_DEMODAT2"
verbose=2
# Skip scanner-derived multi-planar reconstructions & non-distortion-corrected images
# These are used for MRS voxel placement on the scanner and will cause xnat2bids to fail.
skipseq=["anat-t1w_acq-memprage_MPR_Cor","anat-t1w_acq-memprage_MPR_Tra","anat-t1w_acq-memprage_MPR_Tra_ND","anat-t1w_acq-memprage RMS_ND","anat-t1w_acq-memprage_MPR_Cor_ND"]
Run the xnat2bids command
To run the xnat-tools export and BIDS conversion, make sure you are working in a terminal in the same directory where you stored this .toml file (if not, give the full path to the .toml file in your command). On the command line, type:
module load anaconda
python /oscar/data/bnc/shared/scripts/run_xnat2bids.py --config x2b_demodat2_config.toml
Enter your XNAT username and password when prompted.
If you entered your email address, you should receive an email when each of the xnat2bids jobs begins, and another when it finishes.
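You can also check on the jobs yourself at any point with a standard Slurm command:
squeue -u $USER   # list your queued and running jobs on Oscar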
This will create a sourcedata folder for each subject within $bids_root/bnc/study-demodat2/xnat-export and a BIDS-compatible data directory within $bids_root/bnc/study-demodat2/bids/.
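For reference, with our 3 subjects the resulting layout should look roughly like this (sub-001 and friends are placeholders for your actual XNAT subject labels):
$bids_root/bnc/study-demodat2/
├── xnat-export/        # one sourcedata folder per subject
└── bids/               # BIDS-compatible dataset
    ├── sub-001/
    │   ├── ses-01/
    │   └── ses-02/
    ├── sub-002/
    └── sub-003/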
Step 2: Convert PsychoPy timing files to be used by AFNI
The aim of this step is to make our behavioral data BIDS-compatible and to facilitate future data sharing. To do this, we first need to create events.tsv files that correspond to each of our functional runs and contain information about each stimulus event of interest (onset time, condition, etc.). Then, in order to use AFNI for a regression, those .tsv files must be converted into AFNI 1D files and saved in the BIDS derivatives directory.
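As a reference point, a BIDS events.tsv file is tab-separated with (at minimum) onset and duration columns, plus a trial_type column identifying the condition. A few example rows (the condition names here are placeholders matching the labels used later in this tutorial) might look like:
onset     duration    trial_type
0.0       12.0        checks_left
14.5      0.5         keypress_left
24.0      12.0        checks_right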
Download the PsychoPy behavioral timing files (csv)
First, download the participant's data files (in our case, created by PsychoPy) and manually place them in a directory named 'beh' in the sourcedata subfolder of your BIDS directory. For each individual subject and session, the full path should be: $bidsdir/sourcedata/sub-xxx/ses-xx/beh.
Repeat this step for each subject and session, placing the files in their respective beh folders. Files are zipped per subject (unzip the file to view ses-01 and ses-02 separately).
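If you would rather script this than move files by hand, here is a minimal sketch, assuming one zip archive per subject named like sub-001_beh.zip (adapt the names to your actual downloads):
# Sketch: unzip each subject's behavioral archive and file the sessions
# into the BIDS sourcedata tree. Archive and folder names are assumptions.
bidsdir=/path/to/bids   # <-- your BIDS directory
for sub in sub-001 sub-002 sub-003; do
    unzip ${sub}_beh.zip -d ${sub}_beh
    for ses in ses-01 ses-02; do
        mkdir -p $bidsdir/sourcedata/$sub/$ses/beh
        mv ${sub}_beh/$ses/* $bidsdir/sourcedata/$sub/$ses/beh/
    done
done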
Convert the timing files using a batch script
Batch scripting improves the efficiency of data processing because it lets you launch the same job on all subjects/sessions automatically, rather than running the script or command one by one. It takes advantage of the many resources available on our HPC (CPUs, cores, etc.) by running the same script on all your subjects in parallel rather than sequentially.
In this tutorial, we will include the data from both runs of each session (4 runs total per subject) in our afni_proc.py command. To do so, our example script preprocess_behavior.py will 1) convert the .csv files output by PsychoPy into BIDS-organized .tsv files, and then 2) convert those .tsv files into the 1D files that AFNI will use, saving them in the BIDS derivatives folder.
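To make the tsv-to-1D conversion concrete: AFNI's stim_times (.1D) format stores one row of onset times per run, so for each condition the script gathers the onsets from every run's events.tsv. A minimal sketch of that idea for a single condition and run (file and condition names are placeholders; preprocess_behavior.py does the real work across all runs):
# Sketch: pull the onsets (column 1) for one condition out of one run's
# events.tsv and append them as a single row of that condition's .1D file.
# Repeat once per run, in run order, so each run gets its own row.
awk -F'\t' '$3 == "checks_left" { printf "%s ", $1 } END { print "" }' \
    sub-001_ses-01_task-checks_run-01_events.tsv >> sub-001_checks_left_dualsession_stimtimes.1D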
Here you can download our example Python processing script (preprocess_behavior.py), the batch script that will be used to run it (run_preprocess_behavior.sh), and the text file containing subject IDs (subjects.txt). Save these where they can be accessed from your Oscar account. Open the batch script run_preprocess_behavior.sh and fill in your email and the path to your BIDS directory.
Run the batch script
To run the batch script, which will launch multiple iterations of the preprocess_behavior.py script (one job per subject), navigate to the directory where you saved these 3 files. On the command line, type sbatch run_preprocess_behavior.sh. Since you filled in your email, you should receive a message when each of the 3 jobs has launched.
If you are unable to run this script for any reason, you can download the events.tsv output files here and manually place them in each respective directory, e.g.: $bidsdir/sub-xxx/ses-xx/func/. Similarly, you can download the .1D files and manually place them in a new folder called $bidsdir/derivatives/afni/sub-xxx/dualsession/stimtimes/.
Step 3: Prepare fMRI data for preprocessing by warping to standard space
It is possible to quickly warp your data into standard space within the afni_proc.py script itself, but this does not provide the most accurate warping, since it is done via a linear transformation. To perform a nonlinear transformation into standard space, there is a separate AFNI command called sswarper2 (the successor of @SSwarper). This command takes roughly an hour and a half to run on Oscar and is applied to each individual subject's T1 anatomical file. You only need to warp one of the T1s per subject; in the next step, all functional data from sessions 1 and 2 will be aligned to it.
Copy and paste the script below into a file called run_SSW.sh. Change the example email to your own, and edit the path to your BIDS directory. Ensure that it is saved in the same location as your subjects file (subjects.txt). To launch this script on all 3 subjects in parallel, navigate to that directory on the command line and type sbatch run_SSW.sh.
#!/bin/bash
#SBATCH -N 1
#SBATCH -c 8
#SBATCH --mem=10G
#SBATCH --time 3:00:00
#SBATCH -J SSW
# NOTE: the logs/ directory must already exist before you submit; Slurm will not create it
#SBATCH --output=logs/SSW-%A_%a.out
#SBATCH --mail-user [email protected]
#SBATCH --mail-type ALL
#SBATCH --array=1-3
# To run this, type on the terminal: sbatch run_SSW.sh
bidsdir=/path/to/bids # <-- Edit with your bids path
# Pull this array task's subject ID from subjects.txt (one ID per line)
subID=$(sed -n "${SLURM_ARRAY_TASK_ID}p" subjects.txt)
echo "Running subject: $subID"
outdir=$bidsdir/derivatives/afni/$subID/dualsession
mkdir -p $outdir   # -p creates parent dirs and tolerates an existing folder
cd $outdir
# Run sswarper2 on the ses-01 T1 for each subject
# (assumes AFNI is available on your PATH, e.g., via a module)
sswarper2 \
-input $bidsdir/$subID/ses-01/anat/${subID}_ses-01_acq-memprageRMS_T1w.nii.gz \
-base MNI152_2009_template_SSW.nii.gz \
-subid ${subID} -odir ${outdir}
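When the jobs finish, each subject's dualsession folder should contain (among other outputs) the skull-stripped anatomical and the three nonlinear-warp files that the afni_proc.py command in Step 4 expects. A quick way to confirm, shown for one placeholder subject:
# Confirm the sswarper2 outputs that Step 4's afni_proc.py call will need
outdir=$bidsdir/derivatives/afni/sub-001/dualsession
ls $outdir/anatSS.sub-001.nii \
   $outdir/anatQQ.sub-001.nii \
   $outdir/anatQQ.sub-001.aff12.1D \
   $outdir/anatQQ.sub-001_WARP.nii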

Step 4: Use afni_proc.py to create a preprocessing stream and run the general linear model per subject
This basic example of a univariate analysis with AFNI is based on example 6b in the afni_proc.py help.
Run the afni_proc.py batch script
This afniproc_dualsession.sh script will iterate over your list of subjects and launch afni_proc.py on each one individually. afni_proc.py is a metascript, meaning that it creates your actual preprocessing script, proc.sub-xxx_dualsession, for each subject. Since we included the -execute flag at the bottom, proc.sub-xxx_dualsession will be launched automatically when afni_proc.py runs. Reading through the proc.sub-xxx_dualsession script is the best way to gain a deeper understanding of each of AFNI's processing steps.
Copy the text in the box below into a file editor on Oscar. Change the email in the beginning section, and change the value of the bidsdir variable to your own location (the path should end in /bids). Save this script as a file called afniproc_dualsession.sh, and then execute it on the command line with sbatch afniproc_dualsession.sh. It will launch as a batch script, similar to how we ran run_preprocess_behavior.sh. You will receive an email when the job has completed.
#!/bin/bash
#SBATCH -N 1
#SBATCH -c 8
#SBATCH --mem=10G
#SBATCH --time 4:00:00
#SBATCH -J afniproc_dualsession
#SBATCH --output=logs/afniproc_dualsession-%A_%a.out
#SBATCH --mail-user [email protected]
#SBATCH --mail-type ALL
#SBATCH --array=1-3
# To run this, type on the terminal: sbatch afniproc_dualsession.sh
bidsdir=/path/to/bids # <---- FILL THIS IN
subID=$(sed -n "${SLURM_ARRAY_TASK_ID}p" subjects.txt)
echo "Running subject: $subID"
# Define outdir (where Step 3's sswarper2 outputs live) and work from there;
# $outdir is used by the afni_proc.py options below
outdir=$bidsdir/derivatives/afni/$subID/dualsession
mkdir -p $outdir
cd $outdir
# Run afniproc
afni_proc.py \
-subj_id ${subID}_dualsession \
-out_dir $outdir/$subID.dualsession.results \
-copy_anat $outdir/anatSS.${subID}.nii \
-anat_has_skull no \
-dsets $bidsdir/$subID/ses-01/func/*checks*nii* \
$bidsdir/$subID/ses-02/func/*checks*nii* \
-blocks tshift align tlrc volreg blur mask scale regress \
-tcat_remove_first_trs 0 \
-align_opts_aea -cost lpc+ZZ -giant_move -check_flip \
-tlrc_base MNI152_2009_template_SSW.nii.gz \
-tlrc_NL_warp \
-tlrc_NL_warped_dsets $outdir/anatQQ.$subID.nii \
$outdir/anatQQ.$subID.aff12.1D \
$outdir/anatQQ.${subID}_WARP.nii \
-volreg_align_to MIN_OUTLIER \
-volreg_align_e2a \
-volreg_tlrc_warp \
-mask_epi_anat yes \
-blur_size 4.0 \
-regress_stim_times \
$bidsdir/derivatives/afni/$subID/dualsession/stimtimes/${subID}_checks_left_dualsession_stimtimes.1D \
$bidsdir/derivatives/afni/$subID/dualsession/stimtimes/${subID}_checks_right_dualsession_stimtimes.1D \
$bidsdir/derivatives/afni/$subID/dualsession/stimtimes/${subID}_keypress_left_dualsession_stimtimes.1D \
$bidsdir/derivatives/afni/$subID/dualsession/stimtimes/${subID}_keypress_right_dualsession_stimtimes.1D \
-regress_stim_labels leftchx rightchx leftpress rightpress \
-regress_basis_multi 'BLOCK(12,1)' 'BLOCK(12,1)' 'GAM' 'GAM' \
-regress_opts_3dD -jobs 2 \
-gltsym 'SYM: leftchx -rightchx' -glt_label 1 left_vs_right_chx \
-gltsym 'SYM: leftpress -rightpress' -glt_label 2 left_vs_right_press \
-regress_motion_per_run \
-regress_censor_motion 0.3 \
-regress_censor_outliers 0.05 \
-regress_reml_exec \
-regress_compute_fitts \
-regress_make_ideal_sum sum_ideal.1D \
-regress_est_blur_epits \
-regress_est_blur_errts \
-regress_run_clustsim yes \
-html_review_style pythonic \
-execute
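Once a subject's job finishes, a good first sanity check is the automated quality-control report that afni_proc.py builds because of the -html_review_style pythonic option. For one placeholder subject it should land at roughly the path below; open index.html in a browser (for example, through an Open OnDemand desktop session on Oscar):
# The APQC report is written inside the afni_proc.py results directory
ls $bidsdir/derivatives/afni/sub-001/dualsession/sub-001.dualsession.results/QC_sub-001_dualsession/index.html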
Now that each subject's data has been preprocessed (both sessions included), we are ready to move on to the second-level/group analysis!