%matplotlib inline

Nilearn GLM: statistical analyses of MRI in Python#

Nilearn’s GLM/stats module allows fast and easy MRI statistical analysis.

It leverages Nibabel and other Python libraries from the Python scientific stack such as SciPy, NumPy and Pandas.

In this tutorial, we’re going to explore nilearn's GLM functionality by analyzing 1) a single run of a single subject and 2) a three-subject group level example using a General Linear Model (GLM). We’re going to use the same example dataset (ds000114) as in the nibabel and nilearn tutorials. As this is a multi-run, multi-task dataset, we have to decide on a run and a task we want to analyze. Let’s go with ses-test and task-fingerfootlips, starting with a single subject, sub-01.

You can download the dataset here. It contains fMRI data from a few participants. After the download has finished, please make sure to unzip the file and place the resulting directory somewhere accessible, e.g. your Desktop.

Individual level analysis#

Setting and inspecting the data#

First, we have to point to the data we want to analyze. As stated above, we’re going to use the anatomical image and the preprocessed functional image of sub-01 from ses-test. The preprocessing was conducted through fmriprep.

fmri_img = '/Users/peerherholz/Desktop/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_space-MNI152nlin2009casym_desc-preproc_bold.nii.gz'
anat_img = '/Users/peerherholz/Desktop/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'

We can display the mean functional image and the subject’s anatomy:

from nilearn.image import mean_img
# note: this rebinds the name `mean_img` from the function to the resulting image
mean_img = mean_img(fmri_img)
from nilearn.plotting import plot_stat_map, plot_anat, plot_img, show, plot_glass_brain
plot_img(mean_img)
plot_anat(anat_img)

Specifying the experimental paradigm#

We must now provide a description of the experiment, that is, define the timing of the task and rest periods. This is typically provided in an events.tsv file.

import pandas as pd
events = pd.read_table('/Users/peerherholz/Desktop/ds000114/task-fingerfootlips_events.tsv')
print(events)
    onset  duration  weight trial_type
0      10      15.0       1     Finger
1      40      15.0       1       Foot
2      70      15.0       1       Lips
3     100      15.0       1     Finger
4     130      15.0       1       Foot
5     160      15.0       1       Lips
6     190      15.0       1     Finger
7     220      15.0       1       Foot
8     250      15.0       1       Lips
9     280      15.0       1     Finger
10    310      15.0       1       Foot
11    340      15.0       1       Lips
12    370      15.0       1     Finger
13    400      15.0       1       Foot
14    430      15.0       1       Lips

Performing the GLM analysis#

It is now time to create and estimate a FirstLevelModel object, which will generate the design matrix using the information provided by the events object.

from nilearn.glm.first_level import FirstLevelModel

There are a lot of important parameters one needs to define within a FirstLevelModel and the majority of them will have a prominent influence on your results. Thus, make sure to check them before running your model:

FirstLevelModel?
Init signature:
FirstLevelModel(
    t_r=None,
    slice_time_ref=0.0,
    hrf_model='glover',
    drift_model='cosine',
    high_pass=0.01,
    drift_order=1,
    fir_delays=[0],
    min_onset=-24,
    mask_img=None,
    target_affine=None,
    target_shape=None,
    smoothing_fwhm=None,
    memory=Memory(location=None),
    memory_level=1,
    standardize=False,
    signal_scaling=0,
    noise_model='ar1',
    verbose=0,
    n_jobs=1,
    minimize_memory=True,
    subject_label=None,
    random_state=None,
)
Docstring:     
Implement the General Linear Model for single run :term:`fMRI` data.

Parameters
----------
t_r : float
    This parameter indicates :term:`repetition times<TR>`
    of the experimental runs.
    In seconds. It is necessary to correctly consider times in the design
    matrix. This parameter is also passed to :func:`nilearn.signal.clean`.
    Please see the related documentation for details.

slice_time_ref : float, default=0
    This parameter indicates the time of the reference slice used in the
    slice timing preprocessing step of the experimental runs.
    It is expressed as a fraction of the ``t_r`` (repetition time),
    so it can have values between 0. and 1.

hrf_model : :obj:`str`, function, list of functions, or None
    This parameter defines the :term:`HRF` model to be used.
    It can be a string if you are passing the name of a model
    implemented in Nilearn.
    Valid names are:

        - `"spm"`: This is the :term:`HRF` model used in :term:`SPM`.
          See :func:`nilearn.glm.first_level.spm_hrf`.
        - `"spm + derivative"`: SPM model plus its time derivative.
          This gives 2 regressors.
          See :func:`nilearn.glm.first_level.spm_hrf`, and
          :func:`nilearn.glm.first_level.spm_time_derivative`.
        - `"spm + derivative + dispersion"`: Idem, plus dispersion derivative.
          This gives 3 regressors.
          See :func:`nilearn.glm.first_level.spm_hrf`,
          :func:`nilearn.glm.first_level.spm_time_derivative`,
          and :func:`nilearn.glm.first_level.spm_dispersion_derivative`.
        - `"glover"`: This corresponds to the Glover :term:`HRF`.
          See :func:`nilearn.glm.first_level.glover_hrf`.
        - `"glover + derivative"`: The Glover :term:`HRF` + time derivative.
          This gives 2 regressors.
          See :func:`nilearn.glm.first_level.glover_hrf`, and
          :func:`nilearn.glm.first_level.glover_time_derivative`.
        - `"glover + derivative + dispersion"`:
          Idem, plus dispersion derivative.
          This gives 3 regressors.
          See :func:`nilearn.glm.first_level.glover_hrf`,
          :func:`nilearn.glm.first_level.glover_time_derivative`, and
          :func:`nilearn.glm.first_level.glover_dispersion_derivative`.
        - `"fir"`: Finite impulse response basis.
          This is a set of delayed dirac models.

    It can also be a custom model.
    In this case, a function should be provided for each regressor.
    Each function should behave as the other models implemented within Nilearn.
    That is, it should take both `t_r` and `oversampling` as inputs
    and return a sample numpy array of appropriate shape.

    .. note::
        It is expected that `"spm"` standard and `"glover"` models
        would not yield large differences in most cases.

    .. note::
        In case of `"glover"` and `"spm"` models, the derived regressors
        are orthogonalized with respect to the main one.

    Default='glover'.
drift_model : string, default='cosine'
    This parameter specifies the desired drift model for the design
    matrices. It can be 'polynomial', 'cosine' or None.

high_pass : float, default=0.01
    This parameter specifies the cut frequency of the high-pass filter in
    Hz for the design matrices. Used only if drift_model is 'cosine'.

drift_order : int, default=1
    This parameter specifies the order of the drift model (in case it is
    polynomial) for the design matrices.

fir_delays : array of shape(n_onsets) or list, default=[0]
    In case of :term:`FIR` design,
    yields the array of delays used in the :term:`FIR` model,
    in scans.

min_onset : float, default=-24
    This parameter specifies the minimal onset relative to the design
    (in seconds). Events that start before (slice_time_ref * t_r +
    min_onset) are not considered.

mask_img : Niimg-like, NiftiMasker object or False, optional
    Mask to be used on data. If an instance of masker is passed,
    then its mask will be used. If no mask is given,
    it will be computed automatically by a NiftiMasker with default
    parameters. If False is given then the data will not be masked.

target_affine : 3x3 or 4x4 matrix, optional
    This parameter is passed to nilearn.image.resample_img.
    Please see the related documentation for details.

target_shape : 3-tuple of integers, optional
    This parameter is passed to nilearn.image.resample_img.
    Please see the related documentation for details.

smoothing_fwhm : :obj:`float`, optional.
    If `smoothing_fwhm` is not `None`,
    it gives the :term:`full-width at half maximum<FWHM>` in millimeters
    of the spatial smoothing to apply to the signal.
memory : string or pathlib.Path, optional
    Path to the directory used to cache the masking process and the glm
    fit. By default, no caching is done.
    Creates instance of joblib.Memory.

memory_level : integer, optional
    Rough estimator of the amount of memory used by caching. Higher value
    means more memory for caching.

standardize : boolean, default=False
    If standardize is True, the time-series are centered and normed:
    their variance is put to 1 in the time dimension.

signal_scaling : False, int or (int, int), default=0
    If not False, fMRI signals are
    scaled to the mean value of scaling_axis given,
    which can be 0, 1 or (0, 1).
    0 refers to mean scaling each voxel with respect to time,
    1 refers to mean scaling each time point with respect to all voxels &
    (0, 1) refers to scaling with respect to voxels and time,
    which is known as grand mean scaling.
    Incompatible with standardize (standardize=False is enforced when
    signal_scaling is not False).

noise_model : {'ar1', 'ols'}, default='ar1'
    The temporal variance model.

verbose : integer, default=0
    Indicate the level of verbosity. By default, nothing is printed.
    If 0 prints nothing. If 1 prints progress by computation of
    each run. If 2 prints timing details of masker and GLM. If 3
    prints masker computation details.

n_jobs : integer, default=1
    The number of CPUs to use to do the computation. -1 means
    'all CPUs', -2 'all CPUs but one', and so on.

minimize_memory : boolean, default=True
    Gets rid of some variables on the model fit results that are not
    necessary for contrast computation and would only be useful for
    further inspection of model details. This has an important impact
    on memory consumption.

subject_label : string, optional
    This id will be used to identify a `FirstLevelModel` when passed to
    a `SecondLevelModel` object.

random_state : int or numpy.random.RandomState, default=None.
    Random state seed to sklearn.cluster.KMeans
    for autoregressive models
    of order at least 2 ('ar(N)' with n >= 2).

    .. versionadded:: 0.9.1

Attributes
----------
labels_ : array of shape (n_voxels,),
    a map of values on voxels used to identify the corresponding model

results_ : dict,
    with keys corresponding to the different labels values.
    Values are SimpleRegressionResults corresponding to the voxels,
    if minimize_memory is True,
    RegressionResults if minimize_memory is False
File:           ~/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/first_level/first_level.py
Type:           type
Subclasses:     

We need the TR of the functional images; luckily, we can extract that information from the image header using nibabel:

!nib-ls /Users/peerherholz/Desktop/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_space-MNI152nlin2009casym_desc-preproc_bold.nii.gz
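If nib-ls is not available, we can also query the NIfTI header directly with nibabel. A quick sketch: for a 4D image, the first three values returned by get_zooms() are the voxel sizes in mm and the last one is the TR in seconds.

import nibabel as nib

# load the functional image and read the zooms from its header
func_img = nib.load(fmri_img)
print(func_img.header.get_zooms())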

Either way, the TR of this dataset is 2.5 seconds.

fmri_glm = FirstLevelModel(t_r=2.5,
                           noise_model='ar1',
                           hrf_model='spm',
                           drift_model='cosine',
                           high_pass=1./160,
                           signal_scaling=False,
                           minimize_memory=False)

Usually, we also want to include confounds computed during preprocessing (e.g., motion, global signal, etc.) as regressors of no interest. In our example, these were computed by fmriprep and can be found in derivatives/fmriprep/sub-01/ses-test/func/. We can use pandas to inspect that file:

import pandas as pd
confounds = pd.read_csv('/Users/peerherholz/Desktop/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold_desc-confounds_timeseries.tsv', delimiter='\t')
confounds
WhiteMatter GlobalSignal stdDVARS non-stdDVARS vx-wisestdDVARS FramewiseDisplacement tCompCor00 tCompCor01 tCompCor02 tCompCor03 ... aCompCor02 aCompCor03 aCompCor04 aCompCor05 X Y Z RotX RotY RotZ
0 59.958244 296.705627 NaN NaN NaN NaN -0.945194 0.178691 -0.008839 0.023860 ... 0.001226 -0.070598 0.040292 -0.004502 -6.811590e-03 0.000000 0.000000 0.000000 -0.000000 0.000000
1 -0.992987 -6.791035 10.967166 359.572876 15.125442 0.522051 0.095183 -0.081205 0.043344 -0.067988 ... -0.006754 0.102960 -0.020727 0.005408 -1.198610e-09 -0.035396 0.386632 0.001864 -0.000000 0.000000
2 0.121577 -8.960917 0.965478 31.654449 1.251122 0.053361 0.039534 -0.048124 0.023519 -0.009606 ... -0.006754 0.096988 -0.022278 -0.102161 0.000000e+00 -0.019626 0.383440 0.001176 -0.000000 0.000000
3 -3.207463 -15.836485 0.568300 18.632469 0.933274 0.079333 0.064448 -0.049706 0.013449 -0.031648 ... -0.036826 0.078502 0.003200 -0.112108 0.000000e+00 -0.032375 0.430492 0.001567 -0.000000 0.000000
4 -3.531865 -16.072039 0.412484 13.523833 1.089049 0.030417 0.058324 -0.050976 0.013930 -0.027509 ... -0.024400 0.069612 0.006408 -0.060869 2.562640e-10 -0.032767 0.404579 0.001485 -0.000000 0.000000
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
179 -1.131588 6.617327 0.796877 26.126665 0.812802 0.162573 0.001240 -0.007174 -0.044028 -0.010630 ... -0.076143 0.074762 0.004733 0.020203 -5.966000e-01 0.258395 1.762940 0.000232 0.003428 -0.013078
180 -2.441096 3.395323 0.606146 19.873285 0.689635 0.080021 0.003447 0.006033 -0.038060 0.003348 ... 0.006482 0.081220 0.004589 -0.054566 -6.161150e-01 0.237964 1.772880 0.000388 0.003602 -0.013351
181 -4.028681 -0.000021 0.725305 23.780071 0.768956 0.195117 0.008511 0.018631 -0.045767 0.042005 ... 0.019911 -0.027145 -0.028081 0.013844 -6.761900e-01 0.216604 1.814620 -0.000184 0.003883 -0.013937
182 -4.716608 0.375874 0.549176 18.005463 0.654098 0.085730 0.009503 0.000066 -0.031858 -0.026816 ... 0.026724 0.012923 0.025031 0.012589 -6.877110e-01 0.198889 1.825820 -0.000529 0.004307 -0.013800
183 -2.884027 2.981873 0.582871 19.110176 0.675460 0.453545 0.004254 -0.009564 -0.043303 -0.052722 ... -0.014826 0.072763 0.044807 -0.027618 -5.796220e-01 0.244931 1.756250 0.000987 0.003099 -0.011926

184 rows × 24 columns
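To see exactly which confounds fmriprep computed, we can list the column names:

print(confounds.columns.tolist())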

Comparable to other neuroimaging software packages, we get a timepoint × confound dataframe. However, fmriprep computes many more confounds than most of you may be used to, and they require a bit of reading to understand and thus utilize properly. For the sake of simplicity, we therefore stick to the “classic” ones: WhiteMatter, GlobalSignal, FramewiseDisplacement and the motion correction parameters in translation and rotation:

import numpy as np
confounds_glm = confounds[['WhiteMatter', 'GlobalSignal', 'FramewiseDisplacement', 'X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']].replace(np.nan, 0)
confounds_glm
WhiteMatter GlobalSignal FramewiseDisplacement X Y Z RotX RotY RotZ
0 59.958244 296.705627 0.000000 -6.811590e-03 0.000000 0.000000 0.000000 -0.000000 0.000000
1 -0.992987 -6.791035 0.522051 -1.198610e-09 -0.035396 0.386632 0.001864 -0.000000 0.000000
2 0.121577 -8.960917 0.053361 0.000000e+00 -0.019626 0.383440 0.001176 -0.000000 0.000000
3 -3.207463 -15.836485 0.079333 0.000000e+00 -0.032375 0.430492 0.001567 -0.000000 0.000000
4 -3.531865 -16.072039 0.030417 2.562640e-10 -0.032767 0.404579 0.001485 -0.000000 0.000000
... ... ... ... ... ... ... ... ... ...
179 -1.131588 6.617327 0.162573 -5.966000e-01 0.258395 1.762940 0.000232 0.003428 -0.013078
180 -2.441096 3.395323 0.080021 -6.161150e-01 0.237964 1.772880 0.000388 0.003602 -0.013351
181 -4.028681 -0.000021 0.195117 -6.761900e-01 0.216604 1.814620 -0.000184 0.003883 -0.013937
182 -4.716608 0.375874 0.085730 -6.877110e-01 0.198889 1.825820 -0.000529 0.004307 -0.013800
183 -2.884027 2.981873 0.453545 -5.796220e-01 0.244931 1.756250 0.000987 0.003099 -0.011926

184 rows × 9 columns

Now that we have specified the model, we can run it on the fMRI image:

fmri_glm = fmri_glm.fit(fmri_img, events, confounds_glm)
/Users/peerherholz/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/first_level/experimental_paradigm.py:167: UserWarning: The following unexpected columns in events data will be ignored: weight
  warnings.warn(
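The warning is harmless: nilearn simply ignores the extra weight column from the events file. If you prefer to silence it, you can pass only the columns nilearn expects; an optional sketch:

# keep only the columns nilearn expects; the extra 'weight' column triggered the warning above
events = events[['onset', 'duration', 'trial_type']]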

One can inspect the design matrix (rows represent time, and columns contain the predictors).

design_matrix = fmri_glm.design_matrices_[0]

Formally, we have taken the first design matrix, because the model is implicitly meant for multiple runs.

from nilearn.plotting import plot_design_matrix
plot_design_matrix(design_matrix)
import matplotlib.pyplot as plt
plt.show()

Save the design matrix image to disk, first creating a directory where you want to write the images:

import os
outdir = '/Users/peerherholz/Desktop/ds000114/derivatives/GLM'
if not os.path.exists(outdir):
    os.mkdir(outdir)

from os.path import join
plot_design_matrix(design_matrix, output_file=join(outdir, 'design_matrix.png'))

The first column contains the expected response profile of regions that are sensitive to the “Finger” task. Let’s plot this first column:

plt.plot(design_matrix['Finger'])
plt.xlabel('scan')
plt.title('Expected Response for condition "Finger"')
plt.show()

Detecting voxels with significant effects#

To access the estimated coefficients (the betas of the GLM), we create contrasts with a single ‘1’ in the column of interest: the role of a contrast is to select some columns of the model (and potentially weight them) to study the associated statistics. So, in a nutshell, a contrast is a weighted combination of the estimated effects. Here we define canonical contrasts that just consider each condition in isolation; let’s call them “conditions”.

from numpy import array
conditions = {
    'active - Finger': array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
    'active - Foot':   array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
    'active - Lips':   array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
}
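Hard-coding vectors of length 18 is brittle, as the number of columns changes with the drift model and the confounds. As a more robust alternative, we can build the same vectors from the design matrix's column names; a small sketch:

import numpy as np

# one row of the identity matrix per design matrix column
contrast_matrix = np.eye(design_matrix.shape[1])
basic_contrasts = {
    column: contrast_matrix[i]
    for i, column in enumerate(design_matrix.columns)
}
# a difference contrast is then just a weighted combination of columns,
# e.g. "Finger" versus "Lips"
finger_vs_lips = basic_contrasts['Finger'] - basic_contrasts['Lips']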

Let’s look at one of them: we plot the coefficients of the contrast, indexed by the names of the columns of the design matrix.

from nilearn.plotting import plot_contrast_matrix
plot_contrast_matrix(conditions['active - Finger'], design_matrix=design_matrix)

Below, we compute the estimated effect. It is in BOLD signal units, but has no statistical guarantees, because it does not take into account the associated variance.

eff_map = fmri_glm.compute_contrast(conditions['active - Finger'],
                                    output_type='effect_size')

In order to get statistical significance, we form a t-statistic and directly convert it into the z-scale. The z-scale means that the values are scaled to match a standard Gaussian distribution (mean=0, variance=1) across voxels, if there were no effects in the data.
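Conceptually, this conversion maps the p-value of the t-statistic onto the quantile of a standard normal distribution with the same tail probability. A schematic illustration with scipy (this conveys the idea only, not the exact code nilearn runs internally):

from scipy.stats import norm, t as t_dist

def t_to_z(t_value, df):
    # upper-tail probability of the t-statistic under the null hypothesis...
    p = t_dist.sf(t_value, df)
    # ...mapped to the z-value with the same upper-tail probability
    return norm.isf(p)

# with, e.g., 170 degrees of freedom (an arbitrary illustrative value),
# a t-value of 3.0 corresponds to a z-value slightly below 3.0
print(t_to_z(3.0, df=170))

Now let's compute the actual z-map with nilearn: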

z_map = fmri_glm.compute_contrast(conditions['active - Finger'],
                                  output_type='z_score')

Let's plot the thresholded z-score map.

We display it on top of the average functional image of the run (this could also be the anatomical image of the subject). We arbitrarily use a threshold of 3.0 in z-scale; we'll see later how to use corrected thresholds. To display 3 axial views, we pass display_mode='z' and cut_coords=3:

plot_stat_map(z_map, bg_img=mean_img, threshold=3.0,
              display_mode='z', cut_coords=3, black_bg=True,
              title='active - Finger (Z>3)')
plt.show()
plot_glass_brain(z_map, threshold=3.0, black_bg=True, plot_abs=False,
                 title='active - Finger (Z>3)')
plt.show()

Statistical significance testing. One should worry about the statistical validity of the procedure: here we used an arbitrary threshold of 3.0, but the threshold should provide some guarantees on the risk of false detections (a.k.a. type-1 errors in statistics). One first suggestion is to control the false positive rate (FPR) at a certain level, e.g. 0.001: this means that there is a 0.1% chance of declaring an inactive voxel active.

from nilearn.glm.thresholding import threshold_stats_img
_, threshold = threshold_stats_img(z_map, alpha=.001, height_control='fpr')
print('Uncorrected p<0.001 threshold: %.3f' % threshold)
plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,
              display_mode='z', cut_coords=3, black_bg=True,
              title='active - Finger (p<0.001)')
plt.show()
Uncorrected p<0.001 threshold: 3.291
plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False,
                 title='active - Finger (p<0.001)')
plt.show()
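To make the next point concrete, we can count the voxels in the analysis mask and compute how many false positives we would expect at an uncorrected p<0.001 by chance alone. A small sketch, assuming the fitted model exposes its internal masker as masker_ (as recent nilearn versions do):

import numpy as np
# number of voxels inside the analysis mask computed during fitting
n_voxels = int(fmri_glm.masker_.mask_img_.get_fdata().sum())
print('voxels in mask:', n_voxels)
print('expected false positives at p<0.001:', 0.001 * n_voxels)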

The problem is that with this threshold you expect about 0.001 * n_voxels voxels to show up even though they're not truly active, i.e. tens to hundreds of voxels. A more conservative solution is to control the family-wise error rate, i.e. the probability of making one or more false detections, say at 5%. For that we use the so-called Bonferroni correction:

_, threshold = threshold_stats_img(z_map, alpha=.05, height_control='bonferroni')
print('Bonferroni-corrected, p<0.05 threshold: %.3f' % threshold)
plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,
              display_mode='z', cut_coords=3, black_bg=True,
              title='active - Finger (p<0.05, corrected)')
plt.show()
Bonferroni-corrected, p<0.05 threshold: 4.763
plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False,
                 title='active - Finger (p<0.05, corrected)')
plt.show()

This is quite conservative indeed! A popular alternative is to control the false discovery rate (FDR), i.e. the expected proportion of false discoveries among detections.

_, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr')
print('False Discovery rate = 0.05 threshold: %.3f' % threshold)
plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,
              display_mode='z', cut_coords=3, black_bg=True,
              title='active - Finger (fdr=0.05)')
plt.show()
False Discovery rate = 0.05 threshold: 3.779
plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False,
                 title='active - Finger (fdr=0.05)')
plt.show()

Finally, people often like to discard isolated voxels (a.k.a. “small clusters”) from these images. It is possible to generate a thresholded map with small clusters removed by providing a cluster_threshold argument. Here, clusters smaller than 10 voxels will be discarded.

clean_map, threshold = threshold_stats_img(
    z_map, alpha=.05, height_control='fdr', cluster_threshold=10)
plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold,
              display_mode='z', cut_coords=3, black_bg=True, colorbar=False,
              title='active - Finger (fdr=0.05), clusters > 10 voxels')
plt.show()
plot_glass_brain(clean_map, threshold=threshold, black_bg=True, plot_abs=False,
                 title='active - Finger (fdr=0.05), clusters > 10 voxels')
plt.show()

We can save the effect and z-score maps to disk:

z_map.to_filename(join(outdir, 'sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz'))
eff_map.to_filename(join(outdir, 'sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_effmap.nii.gz'))

We can report the positions found in a table:

from nilearn.reporting import get_clusters_table
table = get_clusters_table(z_map, stat_threshold=threshold,
                           cluster_threshold=20)
print(table)
  Cluster ID     X     Y     Z  Peak Stat Cluster Size (mm3)
0          1  44.0 -12.0  58.0   6.761408               1600
1         1a  40.0 -16.0  50.0   5.930616                   

This table can be saved for future use:

table.to_csv(join(outdir, 'table.csv'))

Or use atlasreader to get even more information and informative figures:

from atlasreader import create_output
from os.path import join
create_output(join(outdir, 'sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz'),
              cluster_extent=5, voxel_thresh=threshold)
The Python package you are importing, AtlasReader, is licensed under the
BSD-3 license; however, the atlases it uses are separately licensed under more
restrictive frameworks.
By using AtlasReader, you agree to abide by the license terms of the
individual atlases. Information on these terms can be found online at:
https://github.com/miykael/atlasreader/tree/master/atlasreader/data

Let’s have a look at the csv file containing relevant information about the peak of each cluster. This table contains the cluster association and location of each peak, its signal value at this location, the cluster extent (in mm³, not in number of voxels), as well as the membership of each peak, given a particular atlas.

peak_info = pd.read_csv(join(outdir, 'sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap_peaks.csv'))
peak_info
cluster_id peak_x peak_y peak_z peak_value volume_mm aal desikan_killiany harvard_oxford
0 1.0 44.0 -12.0 58.0 6.761408 1600.0 Precentral_R ctx-rh-precentral 63.0% Right_Precentral_Gyrus; 13.0% Right_Post...
1 2.0 -36.0 -52.0 -22.0 5.810815 960.0 Fusiform_L Unknown 64.0% Left_Temporal_Occipital_Fusiform_Cortex;...
2 3.0 40.0 -4.0 66.0 6.208025 384.0 no_label Unknown 16.0% Right_Precentral_Gyrus

And the clusters:

cluster_info = pd.read_csv(join(outdir, 'sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap_clusters.csv'))
cluster_info
cluster_id peak_x peak_y peak_z cluster_mean volume_mm aal desikan_killiany harvard_oxford
0 1.0 44.0 -12.0 58.0 4.622128 1600.0 92.00% Precentral_R; 8.00% Postcentral_R 44.00% Unknown; 28.00% Right-Cerebral-White-Ma... 88.00% Right_Precentral_Gyrus; 12.00% Right_Po...
1 2.0 -36.0 -52.0 -22.0 4.546988 960.0 66.67% Fusiform_L; 20.00% Cerebelum_6_L; 13.33... 53.33% Unknown; 33.33% Left-Cerebellum-Cortex;... 86.67% Left_Temporal_Occipital_Fusiform_Cortex...
2 3.0 40.0 -4.0 66.0 4.931304 384.0 66.67% Frontal_Sup_2_R; 33.33% no_label 83.33% Unknown; 16.67% ctx-rh-precentral 66.67% Right_Precentral_Gyrus; 33.33% Right_Mi...

For each cluster, we also get a corresponding visualization, saved as .png:

from IPython.display import Image
Image(join(outdir, "sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.png"))
Image(join(outdir, "sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap_cluster01.png"))
Image(join(outdir, "sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap_cluster02.png"))
Image(join(outdir, "sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap_cluster03.png"))

But wait, there’s more! There’s even functionality to create entire GLM reports that include information regarding the model and its parameters, design matrix, contrasts, etc. All we need is the make_glm_report function from nilearn.reporting, applied to our fitted GLM together with a contrast of interest.

from nilearn.reporting import make_glm_report

report = make_glm_report(fmri_glm,
                         contrasts='Finger',
                         bg_img=mean_img
                         )

Once generated, we have several options to view the GLM report: directly in the notebook, in the browser, or saved as an html file:

report
#report.open_in_browser()
#report.save_as_html("GLM_report.html")

Performing an F-test#

“Active vs rest” is a typical t-test: condition versus baseline. Another popular type of test is the F-test, in which one seeks whether a certain combination of conditions (possibly two-, three- or higher-dimensional) explains a significant proportion of the signal. Here one might, for instance, test which voxels are well explained by the combination of the Finger and Lips conditions.

import numpy as np
effects_of_interest = np.vstack((conditions['active - Finger'], conditions['active - Lips']))
plot_contrast_matrix(effects_of_interest, design_matrix)
plt.show()

Specify the contrast and compute the corresponding map. Actually, the contrast specification is done exactly the same way as for t contrasts.

z_map = fmri_glm.compute_contrast(effects_of_interest,
                                  output_type='z_score')

Note that the statistic has been converted to a z-variable, which makes it easier to represent.

clean_map, threshold = threshold_stats_img(
    z_map, alpha=.05, height_control='fdr', cluster_threshold=0)
plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold,
              display_mode='z', cut_coords=3, black_bg=True,
              title='Effects of interest (fdr=0.05)', cmap='magma')
plt.show()

Evaluating models#

While not commonly done, it’s a very good and important idea to actually evaluate your model in terms of its fit. We can do that comprehensively, yet easily, through nilearn's functionality. In more detail, we’re going to inspect the residuals and evaluate the predicted time series. Let’s do this for the peak voxels. First, we have to extract them using get_clusters_table:

table = get_clusters_table(z_map, stat_threshold=1,
                           cluster_threshold=20).set_index('Cluster ID', drop=True)
table.head()
X Y Z Peak Stat Cluster Size (mm3)
Cluster ID
1 -52.0 -12.0 38.0 8.199778 318976
1a 56.0 -8.0 34.0 7.590617
1b 40.0 -4.0 66.0 7.410259
1c -32.0 -52.0 -22.0 6.705043
2 -12.0 -92.0 34.0 5.798149 8064

From this dataframe, we get the largest clusters and prepare a masker to extract their time series:

from nilearn import input_data

# get the largest clusters' max x, y, and z coordinates
coords = table.loc[range(1, 5), ['X', 'Y', 'Z']].values


# extract time series from each coordinate; with the default radius=None,
# the signal of the single voxel at each coordinate is extracted
masker = input_data.NiftiSpheresMasker(coords)

Get and check model residuals#

We can simply obtain the residuals of the peak voxels from our fitted model by applying the prepared masker (and thus the peak voxel coordinates) to the model's residuals:

resid = masker.fit_transform(fmri_glm.residuals[0])

And now, we can plot them and evaluate our peak voxels based on their distribution of residuals:

# colors for each of the clusters
colors = ['blue', 'navy', 'purple', 'magenta', 'olive', 'teal']


fig2, axs2 = plt.subplots(2, 3)
axs2 = axs2.flatten()
for i in range(0, 4):
    axs2[i].set_title('Cluster peak {}\n'.format(coords[i]))
    axs2[i].hist(resid[:, i], color=colors[i])
    print('Mean residuals: {}'.format(resid[:, i].mean()))

fig2.set_size_inches(12, 7)
fig2.tight_layout()
Mean residuals: 585.3252742766122
Mean residuals: 253.89929545754816
Mean residuals: 293.1701082605032
Mean residuals: 433.91214469823296
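Beyond eyeballing the histograms, we can also run a quick numerical normality check on each peak voxel's residuals, for instance with scipy's Shapiro-Wilk test; a small sketch:

from scipy.stats import shapiro

for i in range(4):
    # Shapiro-Wilk test: low p-values indicate deviation from normality
    stat, p = shapiro(resid[:, i])
    print('Cluster peak {}: W = {:.3f}, p = {:.3f}'.format(coords[i], stat, p))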

Get and check predicted time series#

In order to evaluate the predicted time series we need to extract them, as well as the actual time series. To do so, we can use the masker again:

real_timeseries = masker.fit_transform(fmri_img)
predicted_timeseries = masker.fit_transform(fmri_glm.predicted[0])

Having obtained both time series, we can plot them against each other. To make it more informative, we will also visualize the respective peak voxels on the mean functional image:

from nilearn import plotting

# plot the time series and corresponding locations
fig1, axs1 = plt.subplots(2, 4)
for i in range(0, 4):
    # plotting time series
    axs1[0, i].set_title('Cluster peak {}\n'.format(coords[i]))
    axs1[0, i].plot(real_timeseries[:, i], c=colors[i], lw=2)
    axs1[0, i].plot(predicted_timeseries[:, i], c='r', ls='--', lw=2)
    axs1[0, i].set_xlabel('Time')
    axs1[0, i].set_ylabel('Signal intensity', labelpad=0)
    # plotting image below the time series
    roi_img = plotting.plot_stat_map(
        z_map, cut_coords=[coords[i][2]], threshold=3.1, figure=fig1,
        axes=axs1[1, i], display_mode='z', colorbar=False, bg_img=mean_img, cmap='magma')
    roi_img.add_markers([coords[i]], colors[i], 300)
fig1.set_size_inches(24, 14)

Plot the R-squared#

Another option to evaluate our model is to plot the R-squared, that is, the total amount of variance explained by our GLM. While this plot is informative, its interpretation is limited: we can’t tell whether a voxel exhibits a large R-squared because of a response to a condition in our experiment or because of noise. For such questions, one should employ F-tests, as shown above. However, as expected, we see that the R-squared decreases the further voxels are from the receive coils (e.g. deeper in the brain).

plotting.plot_stat_map(fmri_glm.r_square[0], bg_img=mean_img, threshold=.1,
                       display_mode='z', cut_coords=7, cmap='magma')

Group level statistics#

Now that we’ve explored the individual level analysis quite a bit, one might ask: but what about group level statistics? No problem at all: nilearn’s GLM functionality of course supports this as well. As in other software packages, we need to repeat the individual level analysis for each subject to obtain the same contrast image per subject, which we can then submit to a group level analysis.

Run individual level analysis for multiple participants#

By now, we know how to do this easily. Let’s use a simple for loop to repeat the analysis from above for sub-02 and sub-03.

for subject in ['02', '03']:

    # set the fMRI image
    fmri_img = '/Users/peerherholz/Desktop/ds000114/derivatives/fmriprep/sub-%s/ses-test/func/sub-%s_ses-test_task-fingerfootlips_space-MNI152nlin2009casym_desc-preproc_bold.nii.gz' %(subject, subject)
    
    # read in the events 
    events = pd.read_table('/Users/peerherholz/Desktop/ds000114/task-fingerfootlips_events.tsv')
    
    # read in the confounds
    confounds = pd.read_table('/Users/peerherholz/Desktop/ds000114/derivatives/fmriprep/sub-%s/ses-test/func/sub-%s_ses-test_task-fingerfootlips_bold_desc-confounds_timeseries.tsv' %(subject, subject))
    
    # restrict the confounds to the subset of interest
    confounds_glm = confounds[['WhiteMatter', 'GlobalSignal', 'FramewiseDisplacement', 'X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']].replace(np.nan, 0)
    
    # run the GLM
    fmri_glm = fmri_glm.fit(fmri_img, events, confounds_glm)
    
    # compute the contrast as a z-map
    z_map = fmri_glm.compute_contrast(conditions['active - Finger'],
                                  output_type='z_score')
    
    # save the z-map
    z_map.to_filename(join(outdir, 'sub-%s_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz' %subject))
/Users/peerherholz/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/first_level/experimental_paradigm.py:167: UserWarning: The following unexpected columns in events data will be ignored: weight
  warnings.warn(
/Users/peerherholz/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/_utils.py:180: UserWarning: Matrix is singular at working precision, regularizing...
  warn("Matrix is singular at working precision, regularizing...")
/Users/peerherholz/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/first_level/experimental_paradigm.py:167: UserWarning: The following unexpected columns in events data will be ignored: weight
  warnings.warn(

Define a group level model#

As we now have the same contrast from multiple subjects we can define our group level model. At first, we need to gather the individual contrast maps:

from glob import glob
list_z_maps = glob(join(outdir, 'sub-*_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz'))


list_z_maps
['/Users/peerherholz/Desktop/ds000114/derivatives/GLM/sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz',
 '/Users/peerherholz/Desktop/ds000114/derivatives/GLM/sub-02_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz',
 '/Users/peerherholz/Desktop/ds000114/derivatives/GLM/sub-03_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz']

Great! The next step is the definition of a design matrix. As we want to run a simple one-sample t-test, we just need to indicate as many 1s as we have z-maps:

design_matrix = pd.DataFrame([1] * len(list_z_maps),
                             columns=['intercept'])
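For this one-sample t-test, an intercept-only design is all we need. If we had subject-level covariates, we would simply add them as extra columns; a purely hypothetical sketch with a made-up age covariate:

# hypothetical example: intercept plus an age covariate (values are for illustration only)
design_matrix_cov = pd.DataFrame({
    'intercept': [1] * len(list_z_maps),
    'age': [23, 31, 27],
})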

Believe it or not, that’s all it takes. In the next step, we can already set up and run our model. It’s basically identical to the FirstLevelModel: we need to define the images and the design matrix:

from nilearn.glm.second_level import SecondLevelModel
second_level_model = SecondLevelModel()
second_level_model = second_level_model.fit(list_z_maps,
                                            design_matrix=design_matrix)

The same holds true for contrast computation:

z_map_group = second_level_model.compute_contrast(output_type='z_score')

What do we get? After defining a liberal threshold of p<0.001 (uncorrected), we can plot our computed group level contrast image:

from scipy.stats import norm
p001_unc = norm.isf(0.001)

plotting.plot_glass_brain(z_map_group, colorbar=True, threshold=p001_unc,
                          title='Group Finger tapping (unc p<0.001)',
                          plot_abs=False, display_mode='x', cmap='magma')
plotting.show()

Well, not much going on there… But please remember, we only included three participants. Besides this rather simple model, nilearn’s GLM functionality of course also allows you to run paired t-tests, two-sample t-tests, F-tests, etc. (see the sketch below). As shown above, you can also define different thresholds and multiple comparison corrections. There’s yet another cool thing we didn’t talk about: it’s possible to run analyses in a rather automated way if your dataset is in BIDS.
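As a taste of what such a model could look like, here is a sketch of a two-sample t-test design matrix for two hypothetical groups of three subjects each; group membership is encoded by indicator columns:

# hypothetical: six contrast maps, the first three from group A, the last three from group B
design_two_sample = pd.DataFrame({
    'group_A': [1, 1, 1, 0, 0, 0],
    'group_B': [0, 0, 0, 1, 1, 1],
})
# the group difference could then be tested with the contrast [1, -1], e.g.:
# second_level_model.compute_contrast([1, -1], output_type='z_score')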

Performing statistical analyses on BIDS datasets#

Even though model specification and fitting were comparably easy and straightforward, it can get even better. Nilearn’s GLM functionality actually enables you to define models for multiple participants through one function by leveraging the BIDS standard. More precisely, the function first_level_from_bids takes the same input arguments as FirstLevelModel (e.g. t_r, hrf_model, high_pass, etc.) and, given the BIDS raw and derivatives folders plus a task and space label, automatically extracts all information necessary to run the individual level models and creates the models for all participants.

from nilearn.glm.first_level import first_level_from_bids

data_dir = '/Users/peerherholz/Desktop/ds000114/'
task_label = 'fingerfootlips'
space_label = 'MNI152nlin2009casym'
derivatives_folder = 'derivatives/fmriprep'

models, models_run_imgs, models_events, models_confounds = \
    first_level_from_bids(data_dir, task_label, space_label,
                          smoothing_fwhm=5.0,
                          derivatives_folder=derivatives_folder, 
                          t_r=2.5, 
                          noise_model='ar1',
                          hrf_model='spm',
                          drift_model='cosine',
                          high_pass=1./160,
                          signal_scaling=False,
                          minimize_memory=False)
/Users/peerherholz/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/first_level/first_level.py:1327: UserWarning: 't_r' provided (2.5) is different from the value found in the BIDS dataset (None).
Note this may lead to the wrong model specification.
  warn(

Done, let’s check if things work as expected. As an example, we will have a look at the information for sub-01. We’re going to start with the images.

import os
print([os.path.basename(run) for run in models_run_imgs[0]])
['sub-01_ses-test_task-fingerfootlips_space-MNI152nlin2009casym_desc-preproc_bold.nii.gz']

Looks good. How about confounds?

print(models_confounds[0][0])
     WhiteMatter  GlobalSignal   stdDVARS  non-stdDVARS  vx-wisestdDVARS  \
0      59.958244    296.705627        NaN           NaN              NaN   
1      -0.992987     -6.791035  10.967166    359.572876        15.125442   
2       0.121577     -8.960917   0.965478     31.654449         1.251122   
3      -3.207463    -15.836485   0.568300     18.632469         0.933274   
4      -3.531865    -16.072039   0.412484     13.523833         1.089049   
..           ...           ...        ...           ...              ...   
179    -1.131588      6.617327   0.796877     26.126665         0.812802   
180    -2.441096      3.395323   0.606146     19.873285         0.689635   
181    -4.028681     -0.000021   0.725305     23.780071         0.768956   
182    -4.716608      0.375874   0.549176     18.005463         0.654098   
183    -2.884027      2.981873   0.582871     19.110176         0.675460   

     FramewiseDisplacement  tCompCor00  tCompCor01  tCompCor02  tCompCor03  \
0                      NaN   -0.945194    0.178691   -0.008839    0.023860   
1                 0.522051    0.095183   -0.081205    0.043344   -0.067988   
2                 0.053361    0.039534   -0.048124    0.023519   -0.009606   
3                 0.079333    0.064448   -0.049706    0.013449   -0.031648   
4                 0.030417    0.058324   -0.050976    0.013930   -0.027509   
..                     ...         ...         ...         ...         ...   
179               0.162573    0.001240   -0.007174   -0.044028   -0.010630   
180               0.080021    0.003447    0.006033   -0.038060    0.003348   
181               0.195117    0.008511    0.018631   -0.045767    0.042005   
182               0.085730    0.009503    0.000066   -0.031858   -0.026816   
183               0.453545    0.004254   -0.009564   -0.043303   -0.052722   

     ...  aCompCor02  aCompCor03  aCompCor04  aCompCor05             X  \
0    ...    0.001226   -0.070598    0.040292   -0.004502 -6.811590e-03   
1    ...   -0.006754    0.102960   -0.020727    0.005408 -1.198610e-09   
2    ...   -0.006754    0.096988   -0.022278   -0.102161  0.000000e+00   
3    ...   -0.036826    0.078502    0.003200   -0.112108  0.000000e+00   
4    ...   -0.024400    0.069612    0.006408   -0.060869  2.562640e-10   
..   ...         ...         ...         ...         ...           ...   
179  ...   -0.076143    0.074762    0.004733    0.020203 -5.966000e-01   
180  ...    0.006482    0.081220    0.004589   -0.054566 -6.161150e-01   
181  ...    0.019911   -0.027145   -0.028081    0.013844 -6.761900e-01   
182  ...    0.026724    0.012923    0.025031    0.012589 -6.877110e-01   
183  ...   -0.014826    0.072763    0.044807   -0.027618 -5.796220e-01   

            Y         Z      RotX      RotY      RotZ  
0    0.000000  0.000000  0.000000 -0.000000  0.000000  
1   -0.035396  0.386632  0.001864 -0.000000  0.000000  
2   -0.019626  0.383440  0.001176 -0.000000  0.000000  
3   -0.032375  0.430492  0.001567 -0.000000  0.000000  
4   -0.032767  0.404579  0.001485 -0.000000  0.000000  
..        ...       ...       ...       ...       ...  
179  0.258395  1.762940  0.000232  0.003428 -0.013078  
180  0.237964  1.772880  0.000388  0.003602 -0.013351  
181  0.216604  1.814620 -0.000184  0.003883 -0.013937  
182  0.198889  1.825820 -0.000529  0.004307 -0.013800  
183  0.244931  1.756250  0.000987  0.003099 -0.011926  

[184 rows x 24 columns]

Ah, the NaNs again. Let’s fix those as we did last time, but now for all participants.

models_confounds_no_nan = []

for confounds in models_confounds:
    models_confounds_no_nan.append(confounds[0].fillna(0)[['WhiteMatter', 'GlobalSignal', 'FramewiseDisplacement', 'X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']])

Last but not least: how do the events look?

print(models_events[0][0]['trial_type'].value_counts())
trial_type
Finger    5
Foot      5
Lips      5
Name: count, dtype: int64

Fantastic, now we’re ready to run our models. With a little zip magic, this is done without a problem. We’re also going to compute z-maps as before and plot them side by side.

from nilearn import plotting
import matplotlib.pyplot as plt

models_fitted = [] 

fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(12, 8.5))
model_and_args = zip(models, models_run_imgs, models_events, models_confounds_no_nan)
for midx, (model, imgs, events, confounds) in enumerate(model_and_args):
    # fit the GLM
    model.fit(imgs, events, confounds)
        
    models_fitted.append(model)
    
    # compute the contrast of interest
    zmap = model.compute_contrast('Finger')
    plotting.plot_glass_brain(zmap, colorbar=False, threshold=p001_unc,
                              title=('sub-' + model.subject_label),
                              axes=axes[midx],
                              plot_abs=False, display_mode='x', cmap='magma')
fig.suptitle('subjects z_map finger tapping (unc p<0.001)')
plotting.show()
/Users/peerherholz/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/first_level/experimental_paradigm.py:167: UserWarning: The following unexpected columns in events data will be ignored: weight
  warnings.warn(
/Users/peerherholz/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/first_level/experimental_paradigm.py:167: UserWarning: The following unexpected columns in events data will be ignored: weight
  warnings.warn(
/Users/peerherholz/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/_utils.py:180: UserWarning: Matrix is singular at working precision, regularizing...
  warn("Matrix is singular at working precision, regularizing...")
/Users/peerherholz/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/first_level/experimental_paradigm.py:167: UserWarning: The following unexpected columns in events data will be ignored: weight
  warnings.warn(
/Users/peerherholz/anaconda3/envs/nowaschool/lib/python3.10/site-packages/nilearn/glm/first_level/experimental_paradigm.py:167: UserWarning: The following unexpected columns in events data will be ignored: weight
  warnings.warn(

That looks about right. However, let’s also check the design matrix

from nilearn.plotting import plot_design_matrix
plot_design_matrix(models_fitted[0].design_matrices_[0])

and contrast matrix.

plot_contrast_matrix('Finger', models_fitted[0].design_matrices_[0])
plt.show()

Nothing to complain about here, and thus we can move on to the group level model. Instead of assembling contrast images from each participant, we also have the option to simply provide the fitted individual level models as input.

from nilearn.glm.second_level import SecondLevelModel
second_level_input = models_fitted

That’s all it takes and we can run our group level model again.

second_level_model = SecondLevelModel()
second_level_model = second_level_model.fit(second_level_input)

And after computing the contrast

zmap = second_level_model.compute_contrast(
    first_level_contrast='Finger')

we can plot the results again.

plotting.plot_glass_brain(zmap, colorbar=True, threshold=p001_unc,
                          title='Group Finger tapping (unc p<0.001)',
                          plot_abs=False, display_mode='x', cmap='magma')
plotting.show()

If we want, we can also easily inspect the results via an interactive surface plot, as shown a few times before (please note that we changed the threshold, as very few voxels remain due to the limited number of participants):

from nilearn.plotting import view_img_on_surf
view_img_on_surf(zmap, threshold='75%', cmap='magma')
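If we assign the returned view to a variable, we can also open it in a separate browser tab or save it as an html file for sharing:

view = view_img_on_surf(zmap, threshold='75%', cmap='magma')
# view.open_in_browser()
# view.save_as_html(join(outdir, 'group_surface_view.html'))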

That’s all for now. Please note that we only showed a small part of what’s possible. Make sure to check the documentation and the examples it includes. We hope we could show you how powerful nilearn is, now that it includes GLM functionality. While there’s already a lot you can do, there will be even more in the future.