tedana: TE Dependent ANAlysis#

Process a five-echo flashing checkerboard dataset for a software demo#

Author: Daniel Handwerker & Monika Doerig

Date: 23 June 2025

Citation#

Tools included in this workflow#

  • tedana: The tedana Community, Ahmed, Z., Bandettini, P. A., Bottenhorn, K. L., Caballero-Gaudes, C., Dowdle, L. T., DuPre, E., Gonzalez-Castillo, J., Handwerker, D., Heunis, S., Kundu, P., Laird, A. R., Markello, R., Markiewicz, C. J., Maullin-Sapey, T., Moia, S., Molfese, P., Salo, T., Staden, I., … Whitaker, K. (2025). ME-ICA/tedana: 25.0.1 (25.0.1). Zenodo. https://doi.org/10.5281/zenodo.15610868

Publications#

Educational resources#

Dataset#

  • DuPre, E., Salo, T., Whitaker, K. J., Teves, J., Dowdle, L., Reynolds, R. C., & Handwerker, D. A. (2024, February 21). tedana data. Retrieved from osf.io/bpe8h

%%capture
! pip install tedana==25.0.1
import module  # environment-module interface provided by the Jupyter Lmod extension
await module.load('afni/24.3.00')
await module.list()
Lmod Warning: MODULEPATH is undefined.



Lmod has detected the following error: The following module(s) are unknown:
"afni/24.3.00"

Please check the spelling or version number. Also try "module spider ..."
It is also possible your cache file is out-of-date; it may help to try:
  $ module --ignore_cache load "afni/24.3.00"

Also make sure that all modulefiles written in TCL start with the string
#%Module
The AFNI module could not be loaded in this environment, so the AFNI commands later in this notebook fail with "command not found".
%matplotlib inline
import os
import os.path as op
from glob import glob
import webbrowser

from tedana.workflows import tedana_workflow

Download 5 echo data#

%%time
dset_dir5 = 'five-echo-dataset/'
wd = os.getcwd()

if not op.isdir(dset_dir5):
    os.mkdir(dset_dir5)

!curl -L -o five_echo_NIH.tar.xz https://osf.io/ea5v3/download
!tar xf five_echo_NIH.tar.xz -C five-echo-dataset
os.remove('five_echo_NIH.tar.xz')
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 70148k 100 70148k   0     0  6686k     0   0:00:10  0:00:10 --:--:-- 15470k
CPU times: user 264 ms, sys: 203 ms, total: 468 ms
Wall time: 11.7 s
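For environments without `curl`, the download-and-unpack step can also be done in pure Python with the standard library. A minimal sketch mirroring the shell commands above (the function call is left commented out so the cell does not re-download the archive):

```python
import tarfile
import urllib.request
from pathlib import Path

def fetch_and_extract(url: str, archive: str, dest: str) -> None:
    """Download an archive and unpack it into dest, mirroring the
    curl/tar shell commands above."""
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(url, archive)  # also accepts file:// URLs
    with tarfile.open(archive) as tf:  # auto-detects .tar.xz compression
        tf.extractall(dest_dir)
    Path(archive).unlink()  # remove the archive after extraction

# Same OSF URL as the curl call above:
# fetch_and_extract("https://osf.io/ea5v3/download", "five_echo_NIH.tar.xz", "five-echo-dataset")
```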
# Clone GitHub repo and copy files
!git clone https://github.com/ME-ICA/ohbm-2025-multiecho.git temp_repo
!cp -r temp_repo/five-echo-dataset/* five-echo-dataset/
!rm -rf temp_repo
Cloning into 'temp_repo'...
remote: Enumerating objects: 156, done.
remote: Counting objects: 100% (156/156), done.
remote: Compressing objects: 100% (124/124), done.
remote: Total 156 (delta 44), reused 141 (delta 32), pack-reused 0 (from 0)
Receiving objects: 100% (156/156), 16.37 MiB | 18.46 MiB/s, done.
Resolving deltas: 100% (44/44), done.

Run workflow on 5 echo data#

%%time
dset_dir5_out = f"{dset_dir5}tedana_processed"
files = sorted(glob(op.join(dset_dir5, 'p06*.nii.gz')))
tes = [15.4, 29.7, 44.0, 58.3, 72.6]
tedana_workflow(files, tes, 
    tree="minimal",
    fixed_seed=42,
    ica_method="robustica",
    n_robust_runs=30,
    tedpca=53,
    out_dir=dset_dir5_out,
    tedort=False
    )
Setting clustering defaults: {'min_samples': 15}
Running FastICA multiple times...
Inferring sign of components...
Clustering...
Computing centroids...
Computing Silhouettes...
Computing Iq...
CPU times: user 3h 42min 32s, sys: 42.1 s, total: 3h 43min 14s
Wall time: 10min 53s
INFO     tedana:tedana_workflow:608 Using output directory: /tmp/tmpzwzv0mi9/five-echo-dataset/tedana_processed
INFO     tedana:tedana_workflow:627 Initializing and validating component selection tree
WARNING  component_selector:validate_tree:146 Decision tree includes fields that are not used or logged ['_comment']
INFO     component_selector:__init__:333 Performing component selection with minimal_decision_tree
INFO     component_selector:__init__:334 first version of minimal decision tree
INFO     tedana:tedana_workflow:630 Loading input data: ['five-echo-dataset/p06.SBJ01_S09_Task11_e1.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e2.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e3.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e4.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e5.sm.nii.gz']
INFO     io:__init__:156 Generating figures directory: /tmp/tmpzwzv0mi9/five-echo-dataset/tedana_processed/figures
WARNING  tedana:tedana_workflow:735 Computing EPI mask from first echo using nilearn's compute_epi_mask function. Most external pipelines include more reliable masking functions. It is strongly recommended to provide an external mask, and to visually confirm that mask accurately conforms to data boundaries.
INFO     utils:make_adaptive_mask:202 Echo-wise intensity thresholds for adaptive mask: [5853.6399821  4862.62750244 4073.26911418 3377.14188232 2800.73880819]
WARNING  utils:make_adaptive_mask:231 4 voxels in user-defined mask do not have good signal. Removing voxels from mask.
INFO     tedana:tedana_workflow:774 Computing T2* map
/opt/conda/lib/python3.13/site-packages/tedana/decay.py:541: RuntimeWarning: Mean of empty slice
  rmse_map = np.nanmean(rmse, axis=1)
INFO     combine:make_optcom:192 Optimally combining data with voxel-wise T2* estimates
INFO     tedana:tedana_workflow:822 Writing optimally combined data set: /tmp/tmpzwzv0mi9/five-echo-dataset/tedana_processed/desc-optcom_bold.nii.gz
INFO     pca:tedpca:208 Computing PCA of optimally combined multi-echo data with selection criteria: 53
INFO     collect:generate_metrics:161 Calculating weight maps
INFO     collect:generate_metrics:173 Calculating parameter estimate maps for optimally combined data
INFO     collect:generate_metrics:193 Calculating z-statistic maps
INFO     collect:generate_metrics:203 Calculating F-statistic maps
INFO     collect:generate_metrics:228 Thresholding z-statistic maps
INFO     collect:generate_metrics:238 Calculating T2* F-statistic maps
INFO     collect:generate_metrics:248 Calculating S0 F-statistic maps
INFO     collect:generate_metrics:259 Counting significant voxels in T2* F-statistic maps
INFO     collect:generate_metrics:265 Counting significant voxels in S0 F-statistic maps
INFO     collect:generate_metrics:272 Thresholding optimal combination beta maps to match T2* F-statistic maps
INFO     collect:generate_metrics:281 Thresholding optimal combination beta maps to match S0 F-statistic maps
INFO     collect:generate_metrics:291 Calculating kappa and rho
INFO     collect:generate_metrics:300 Calculating variance explained
INFO     collect:generate_metrics:306 Calculating normalized variance explained
INFO     collect:generate_metrics:313 Calculating DSI between thresholded T2* F-statistic and optimal combination beta maps
INFO     collect:generate_metrics:323 Calculating DSI between thresholded S0 F-statistic and optimal combination beta maps
INFO     collect:generate_metrics:334 Calculating signal-noise t-statistics
/opt/conda/lib/python3.13/site-packages/scipy/_lib/deprecation.py:234: SmallSampleWarning: One or more sample arguments is too small; all returned values will be NaN. See documentation for sample size requirements.
  return f(*args, **kwargs)
INFO     collect:generate_metrics:368 Counting significant noise voxels from z-statistic maps
INFO     collect:generate_metrics:380 Calculating decision table score
INFO     pca:tedpca:412 Selected 53 components with 88.73% normalized variance explained using a fixed number of components and no dimensionality estimate
/opt/conda/lib/python3.13/site-packages/tedana/io.py:355: FutureWarning: Downcasting behavior in `replace` is deprecated and will be removed in a future version. To retain the old behavior, explicitly call `result.infer_objects(copy=False)`. To opt-in to the future behavior, set `pd.set_option('future.no_silent_downcasting', True)`
  deblanked = data.replace("", np.nan)
100%|██████████| 30/30 [07:30<00:00, 15.00s/it]
INFO     ica:r_ica:204 For RobustICA, FastICA did not converge in 3 of 30 interations.
INFO     ica:r_ica:225 The DBSCAN clustering algorithm was used for clustering components across different runs
INFO     ica:r_ica:243 RobustICA with 30 robust runs and seed 42 was used. 39 components identified. The mean Index Quality is 0.9508220000275611.
INFO     ica:r_ica:251 The DBSCAN clustering algorithm detected outliers when clustering components for different runs. These outliers are excluded when calculating the index quality and the mixing matrix to maximise the robustness of the decomposition.
/opt/conda/lib/python3.13/site-packages/sklearn/manifold/_t_sne.py:1164: FutureWarning: 'n_iter' was renamed to 'max_iter' in version 1.5 and will be removed in 1.7.
  warnings.warn(
INFO     collect:generate_metrics:161 Calculating weight maps
INFO     collect:generate_metrics:173 Calculating parameter estimate maps for optimally combined data
INFO     collect:generate_metrics:193 Calculating z-statistic maps
INFO     collect:generate_metrics:203 Calculating F-statistic maps
INFO     collect:generate_metrics:228 Thresholding z-statistic maps
INFO     collect:generate_metrics:238 Calculating T2* F-statistic maps
INFO     collect:generate_metrics:248 Calculating S0 F-statistic maps
INFO     collect:generate_metrics:259 Counting significant voxels in T2* F-statistic maps
INFO     collect:generate_metrics:265 Counting significant voxels in S0 F-statistic maps
INFO     collect:generate_metrics:272 Thresholding optimal combination beta maps to match T2* F-statistic maps
INFO     collect:generate_metrics:281 Thresholding optimal combination beta maps to match S0 F-statistic maps
INFO     collect:generate_metrics:291 Calculating kappa and rho
INFO     collect:generate_metrics:300 Calculating variance explained
INFO     collect:generate_metrics:306 Calculating normalized variance explained
INFO     collect:generate_metrics:313 Calculating DSI between thresholded T2* F-statistic and optimal combination beta maps
INFO     collect:generate_metrics:323 Calculating DSI between thresholded S0 F-statistic and optimal combination beta maps
INFO     collect:generate_metrics:334 Calculating signal-noise t-statistics
/opt/conda/lib/python3.13/site-packages/scipy/_lib/deprecation.py:234: SmallSampleWarning: One or more sample arguments is too small; all returned values will be NaN. See documentation for sample size requirements.
  return f(*args, **kwargs)
INFO     tedana:tedana_workflow:894 Selecting components from ICA results
INFO     tedica:automatic_selection:54 Performing ICA component selection with tree: minimal
INFO     selection_nodes:manual_classify:104 Step 0: manual_classify: Set all to unclassified
INFO     selection_utils:comptable_classification_changer:293 Step 0: No components fit criterion False to change classification
INFO     selection_utils:log_decision_tree_step:447 Step 0: manual_classify applied to 39 components. 39 True -> unclassified. 0 False -> nochange.
INFO     selection_nodes:manual_classify:136 Step 0: manual_classify component classification tags are cleared
INFO     selection_utils:log_classification_counts:492 Step 0: Total component classifications: 39 unclassified
INFO     selection_nodes:dec_left_op_right:389 Step 1: left_op_right: rejected if rho>kappa, else nochange
INFO     selection_utils:log_decision_tree_step:447 Step 1: left_op_right applied to 39 components. 5 True -> rejected. 34 False -> nochange.
INFO     selection_utils:log_classification_counts:492 Step 1: Total component classifications: 5 rejected, 34 unclassified
INFO     selection_nodes:dec_left_op_right:389 Step 2: left_op_right: rejected if ['countsigFS0>countsigFT2 & countsigFT2>0'], else nochange
INFO     selection_utils:log_decision_tree_step:447 Step 2: left_op_right applied to 39 components. 2 True -> rejected. 37 False -> nochange.
INFO     selection_utils:log_classification_counts:492 Step 2: Total component classifications: 5 rejected, 34 unclassified
INFO     selection_nodes:calc_median:653 Step 3: calc_median: Median(median_varex)
INFO     selection_utils:log_decision_tree_step:459 Step 3: calc_median calculated: median_varex=0.5933764147808378
INFO     selection_utils:log_classification_counts:492 Step 3: Total component classifications: 5 rejected, 34 unclassified
INFO     selection_nodes:dec_left_op_right:389 Step 4: left_op_right: rejected if ['dice_FS0>dice_FT2 & variance explained>0.59'], else nochange
INFO     selection_utils:comptable_classification_changer:293 Step 4: No components fit criterion True to change classification
INFO     selection_utils:log_decision_tree_step:447 Step 4: left_op_right applied to 39 components. 0 True -> rejected. 39 False -> nochange.
INFO     selection_utils:log_classification_counts:492 Step 4: Total component classifications: 5 rejected, 34 unclassified
INFO     selection_nodes:dec_left_op_right:389 Step 5: left_op_right: rejected if ['0>signal-noise_t & variance explained>0.59'], else nochange
INFO     selection_utils:log_decision_tree_step:447 Step 5: left_op_right applied to 39 components. 4 True -> rejected. 35 False -> nochange.
INFO     selection_utils:log_classification_counts:492 Step 5: Total component classifications: 7 rejected, 32 unclassified
INFO     selection_nodes:calc_kappa_elbow:767 Step 6: calc_kappa_elbow: Calc Kappa Elbow
INFO     selection_utils:kappa_elbow_kundu:668 Calculating kappa elbow based on all components.
INFO     selection_utils:log_decision_tree_step:459 Step 6: calc_kappa_elbow calculated: kappa_elbow_kundu=72.01029540010363, kappa_allcomps_elbow=72.01029540010363, kappa_nonsig_elbow=None, varex_upper_p=0.7008466697871423
INFO     selection_utils:log_classification_counts:492 Step 6: Total component classifications: 7 rejected, 32 unclassified
INFO     selection_nodes:calc_rho_elbow:902 Step 7: calc_rho_elbow: Calc Rho Elbow
INFO     selection_utils:log_decision_tree_step:459 Step 7: calc_rho_elbow calculated: rho_elbow_liberal=20.05564798053916, rho_allcomps_elbow=20.05564798053916, rho_unclassified_elbow=19.716371799613857, elbow_f05=7.708647422176786
INFO     selection_utils:log_classification_counts:492 Step 7: Total component classifications: 7 rejected, 32 unclassified
INFO     selection_nodes:dec_left_op_right:389 Step 8: left_op_right: provisionalaccept if kappa>=72.01, else provisionalreject
INFO     selection_utils:log_decision_tree_step:447 Step 8: left_op_right applied to 32 components. 13 True -> provisionalaccept. 19 False -> provisionalreject.
INFO     selection_utils:log_classification_counts:492 Step 8: Total component classifications: 13 provisionalaccept, 19 provisionalreject, 7 rejected
INFO     selection_nodes:dec_left_op_right:389 Step 9: left_op_right: accepted if kappa>2*rho, else nochange
INFO     selection_utils:comptable_classification_changer:293 Step 9: No components fit criterion False to change classification
INFO     selection_utils:log_decision_tree_step:447 Step 9: left_op_right applied to 13 components. 13 True -> accepted. 0 False -> nochange.
INFO     selection_utils:log_classification_counts:492 Step 9: Total component classifications: 13 accepted, 19 provisionalreject, 7 rejected
INFO     selection_nodes:dec_left_op_right:389 Step 10: left_op_right: provisionalreject if rho>20.06, else nochange
INFO     selection_utils:log_decision_tree_step:447 Step 10: left_op_right applied to 19 components. 3 True -> provisionalreject. 16 False -> nochange.
INFO     selection_utils:log_classification_counts:492 Step 10: Total component classifications: 13 accepted, 19 provisionalreject, 7 rejected
INFO     selection_nodes:dec_variance_lessthan_thresholds:533 Step 11: variance_lt_thresholds: accepted if variance explained<0.1. All variance<1.0, else nochange
INFO     selection_utils:comptable_classification_changer:293 Step 11: No components fit criterion True to change classification
INFO     selection_utils:log_decision_tree_step:447 Step 11: variance_lt_thresholds applied to 19 components. 0 True -> accepted. 19 False -> nochange.
INFO     selection_utils:log_classification_counts:492 Step 11: Total component classifications: 13 accepted, 19 provisionalreject, 7 rejected
INFO     selection_nodes:manual_classify:104 Step 12: manual_classify: Set provisionalaccept to accepted
INFO     selection_utils:log_decision_tree_step:441 Step 12: manual_classify not applied because no remaining components were classified as provisionalaccept
INFO     selection_utils:log_classification_counts:492 Step 12: Total component classifications: 13 accepted, 19 provisionalreject, 7 rejected
INFO     selection_nodes:manual_classify:104 Step 13: manual_classify: Set ['provisionalreject', 'unclassified'] to rejected
INFO     selection_utils:comptable_classification_changer:293 Step 13: No components fit criterion False to change classification
INFO     selection_utils:log_decision_tree_step:447 Step 13: manual_classify applied to 19 components. 19 True -> rejected. 0 False -> nochange.
INFO     selection_utils:log_classification_counts:492 Step 13: Total component classifications: 13 accepted, 26 rejected
INFO     io:denoise_ts:613 Variance explained by decomposition: 94.68%
INFO     io:write_split_ts:700 Writing denoised time series: /tmp/tmpzwzv0mi9/five-echo-dataset/tedana_processed/desc-denoised_bold.nii.gz
INFO     io:writeresults:749 Writing full ICA coefficient feature set: /tmp/tmpzwzv0mi9/five-echo-dataset/tedana_processed/desc-ICA_components.nii.gz
INFO     io:writeresults:753 Writing denoised ICA coefficient feature set: /tmp/tmpzwzv0mi9/five-echo-dataset/tedana_processed/desc-ICAAccepted_components.nii.gz
INFO     io:writeresults:759 Writing Z-normalized spatial component maps: /tmp/tmpzwzv0mi9/five-echo-dataset/tedana_processed/desc-ICAAccepted_stat-z_components.nii.gz
INFO     tedana:tedana_workflow:1116 Making figures folder with static component maps and timecourse plots.
INFO     io:denoise_ts:613 Variance explained by decomposition: 94.68%
/opt/conda/lib/python3.13/site-packages/tedana/io.py:904: UserWarning: Data array used to create a new image contains 64-bit ints. This is likely due to creating the array with numpy and passing `int` as the `dtype`. Many tools such as FSL and SPM cannot deal with int64 in Nifti images, so for compatibility the data has been converted to int32.
  nii = new_img_like(ref_img, newdata, affine=affine, copy_header=copy_header)
/opt/conda/lib/python3.13/site-packages/tedana/io.py:904: UserWarning: Data array used to create a new image contains 64-bit ints. This is likely due to creating the array with numpy and passing `int` as the `dtype`. Many tools such as FSL and SPM cannot deal with int64 in Nifti images, so for compatibility the data has been converted to int32.
  nii = new_img_like(ref_img, newdata, affine=affine, copy_header=copy_header)
/opt/conda/lib/python3.13/site-packages/tedana/io.py:904: UserWarning: Data array used to create a new image contains 64-bit ints. This is likely due to creating the array with numpy and passing `int` as the `dtype`. Many tools such as FSL and SPM cannot deal with int64 in Nifti images, so for compatibility the data has been converted to int32.
  nii = new_img_like(ref_img, newdata, affine=affine, copy_header=copy_header)
/opt/conda/lib/python3.13/site-packages/nilearn/plotting/img_plotting.py:1416: UserWarning: Non-finite values detected. These values will be replaced with zeros.
  safe_get_data(stat_map_img, ensure_finite=True),
INFO     tedana:tedana_workflow:1172 Generating dynamic report
INFO     html_report:_update_template_bokeh:164 Checking for adaptive mask: adaptive_mask.svg, exists: True
INFO     html_report:_update_template_bokeh:204 T2* files exist: True
INFO     html_report:_update_template_bokeh:205 S0 files exist: True
INFO     html_report:_update_template_bokeh:206 RMSE files exist: True
INFO     html_report:_update_template_bokeh:212 External regressors exist: False
INFO     tedana:tedana_workflow:1175 Workflow completed
INFO     utils:log_newsletter_info:705 Don't forget to subscribe to the tedana newsletter for updates! This is a very low volume email list.
INFO     utils:log_newsletter_info:709 https://groups.google.com/g/tedana-newsletter
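The "Optimally combining data with voxel-wise T2* estimates" step in the log weights each echo before summing; tedana's default combination uses Posse-style weights proportional to TE·exp(−TE/T2*). A toy calculation with this dataset's echo times (the T2* value and signal intensities below are made up for illustration):

```python
import numpy as np

# Echo times from this dataset (ms) and a hypothetical voxel T2*
tes = np.array([15.4, 29.7, 44.0, 58.3, 72.6])
t2star = 45.0

# Posse-style weights: w_i proportional to TE_i * exp(-TE_i / T2*),
# normalized to sum to 1
weights = tes * np.exp(-tes / t2star)
weights /= weights.sum()

# The optimally combined signal is the weighted sum across echoes
echo_signals = np.array([1000.0, 800.0, 650.0, 520.0, 420.0])  # made-up intensities
optcom = float(np.sum(weights * echo_signals))
```

Note that the weights peak at the echo time closest to the voxel's T2*, which is why contrast is best preserved near TE ≈ T2*.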

Tedana report of 5 echo data#

You can explore an example of an interactive tedana report here.

The tedana report for the current dataset was generated at the following location:

# Path to the tedana report
url = os.path.abspath(dset_dir5_out + '/tedana_report.html')
print(url)
/tmp/tmpzwzv0mi9/five-echo-dataset/tedana_processed/tedana_report.html
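The notebook imports `webbrowser` at the top but never uses it; when a browser is available, the report can also be opened programmatically. A minimal sketch (the path is a hypothetical stand-in; substitute the one printed above):

```python
import os
import webbrowser

# Hypothetical report location; substitute the path printed by the cell above
report_path = os.path.abspath("five-echo-dataset/tedana_processed/tedana_report.html")
file_url = f"file://{report_path}"

# webbrowser.open returns False when no browser can be launched
# (e.g. on a headless JupyterHub); in that case, use the right-click
# approach described in this section instead
opened = webbrowser.open(file_url)
```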

To view the interactive tedana HTML report with all figures displayed correctly, right-click the file at the path shown above and select "Open in New Browser Tab".

Below, selected components from the generated tedana report are visualized.

Carpet plot#

from IPython.display import Image
from IPython.core.display import SVG

SVG(filename='five-echo-dataset/tedana_processed/figures/carpet_optcom.svg')
_images/c659733e9303e8e35ae0dd02721f251e0cc8e6089094c8bc8002acf92ad1c341.svg

Adaptive Mask#

SVG(filename='five-echo-dataset/tedana_processed/figures/adaptive_mask.svg')
_images/c23c38ce09c38e004e2eac60d547928fc220a6a6a81a08ebc50ff46af02dbcf3.svg

T2*#

t2star_brain = SVG(filename='five-echo-dataset/tedana_processed/figures/t2star_brain.svg')
t2star_histogram = SVG(filename='five-echo-dataset/tedana_processed/figures/t2star_histogram.svg')

display(t2star_brain, t2star_histogram)
_images/8f12b2cc8208b3035a08f029be290a43dc2432f15f1af63e700c27981c6b6db3.svg _images/21e0a8f8a69568aa7fd2cca3d777fe8fb8bb86080196a88e8b2c3d381e89b8e4.svg

S0#

s0_brain = SVG(filename='five-echo-dataset/tedana_processed/figures/s0_brain.svg')
s0_histogram = SVG(filename='five-echo-dataset/tedana_processed/figures/s0_histogram.svg')

display(s0_brain, s0_histogram)
_images/d8a55c81da37f70a7510a85f3b2079e06569affde87288f7fe4abf421910469d.svg _images/ee624089f94d84c917ef5b90185f143294e3969667c9698ff7ff5123b0189501.svg

T2* and S0 model fit (RMSE), scaled between the 2nd and 98th percentiles#

rmse_brain = SVG(filename='five-echo-dataset/tedana_processed/figures/rmse_brain.svg')
rmse_timeseries = SVG(filename='five-echo-dataset/tedana_processed/figures/rmse_timeseries.svg')

display(rmse_brain, rmse_timeseries)
_images/6ae06d461cc79ea334e335c000f194b6936b7a3b6d9138480fb35c6c74114b5f.svg _images/879636031dacafe0c57a3d84f21f07f90ee28bca5874c5fdf3c0942279b208fe.svg
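The RMSE maps above summarize how well the monoexponential decay model fits the data. tedana's default log-linear fit estimates S0 and T2* by regressing log-signal against echo time; a self-contained sketch on noiseless synthetic data (true S0 and T2* values are made up):

```python
import numpy as np

# Log-linear monoexponential fit: log(S) = log(S0) - TE / T2*
tes = np.array([15.4, 29.7, 44.0, 58.3, 72.6])  # ms
true_s0, true_t2s = 5000.0, 40.0
signal = true_s0 * np.exp(-tes / true_t2s)

# Least-squares fit of log-signal against TE
slope, intercept = np.polyfit(tes, np.log(signal), 1)
s0_hat = np.exp(intercept)
t2s_hat = -1.0 / slope

# RMSE of the model fit in signal units, the quantity the maps above summarize
fitted = s0_hat * np.exp(-tes / t2s_hat)
rmse = np.sqrt(np.mean((signal - fitted) ** 2))
```

With noiseless data the fit recovers S0 and T2* exactly; on real data, high RMSE flags voxels where the monoexponential model is a poor description (e.g. dropout or large partial-volume effects).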

Time series generation using AFNI commands#

%%bash
# A rough CSF mask for demonstration purposes:
# segment the CSF, erode by 1 voxel,
# and retain voxels containing 75% of the CSF mask when downsampled to EPI space
cd five-echo-dataset

3dSeg -anat SBJ01_Anatomy.nii.gz -mask AUTO \
    -classes 'CSF ; GM ; WM'  \
    -bias_classes 'GM ; WM' \
    -bias_fwhm 25 -mixfrac UNI -main_N 5 \
    -blur_meth BFT
3dcalc -a ./Segsy/Classes+tlrc -expr 'equals(a, 1)' -prefix CSF_anatresolution.nii.gz
3dmask_tool -input CSF_anatresolution.nii.gz \
    -prefix CSF_eroded.nii.gz \
    -dilate_result -1 -fill_holes 
3dfractionize -template p06.SBJ01_S09_Task11_e3.sm.nii.gz \
    -prefix CSF_mask.nii.gz \
    -input CSF_eroded.nii.gz \
    -clip 0.75

# make CSF principal components
3dpc -mask CSF_mask.nii.gz -pcsave 3  \
    -prefix CSF_timeseries \
    ./tedana_processed/desc-optcom_bold.nii.gz

# Combine all external regressors into a single file
external_regress_header="mot_x\tmot_y\tmot_z\tmot_pitch\tmot_roll\tmot_yaw\t"\
"mot_dx\tmot_dy\tmot_dz\tmot_dpitch\tmot_droll\tmot_dyaw\tcsf1\tcsf2\tcsf3\t"\
"signal_checkerboard"

1dcat -tsvout \
    SBJ01_S09_Task11_e2_Motion.demean.1D \
    SBJ01_S09_Task11_e2_Motion.demean.der.1D \
    CSF_timeseries0?.1D \
    block_task_response.1D \
    > tmp.tsv

# Convert spaces to tabs and skip the header line from 1dcat
tail -n +2 tmp.tsv | tr ' ' '\t' > tmp_clean.tsv

# Add header
(echo -e "$external_regress_header"; cat tmp_clean.tsv) > external_regressors.tsv

# Clean up
rm tmp.tsv tmp_clean.tsv
bash: line 6: 3dSeg: command not found
bash: line 11: 3dcalc: command not found
bash: line 12: 3dmask_tool: command not found
bash: line 15: 3dfractionize: command not found
bash: line 21: 3dpc: command not found
bash: line 30: 1dcat: command not found
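Because the AFNI binaries were not found, the external regressors file was never built. When AFNI's `1dcat` is unavailable, an equivalent tab-separated file can be assembled with pandas. A sketch using random stand-ins for the motion, CSF, and task time series (the column names match the header defined in the cell above):

```python
import numpy as np
import pandas as pd

# Random stand-ins for the real time series (160 volumes in this dataset)
n_vols = 160
rng = np.random.default_rng(0)
motion = rng.standard_normal((n_vols, 6))
motion_deriv = np.vstack([np.zeros(6), np.diff(motion, axis=0)])  # temporal derivatives
csf_pcs = rng.standard_normal((n_vols, 3))
task = rng.standard_normal((n_vols, 1))

# Same column order as the header string in the bash cell above
columns = (
    ["mot_x", "mot_y", "mot_z", "mot_pitch", "mot_roll", "mot_yaw"]
    + ["mot_dx", "mot_dy", "mot_dz", "mot_dpitch", "mot_droll", "mot_dyaw"]
    + ["csf1", "csf2", "csf3"]
    + ["signal_checkerboard"]
)
regressors = pd.DataFrame(
    np.hstack([motion, motion_deriv, csf_pcs, task]), columns=columns
)
regressors.to_csv("external_regressors.tsv", sep="\t", index=False)
```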

Run workflow on 5 echo data using existing mixing matrix and external regressors#

%%time
dset_dir5_extern_out = f"{dset_dir5}tedana_external_regress_processed"
files = sorted(glob(op.join(dset_dir5, 'p06*.nii.gz')))
tes = [15.4, 29.7, 44.0, 58.3, 72.6]
tedana_workflow(files, tes, 
    tree="demo_external_regressors_motion_task_models",
    external_regressors=op.join(dset_dir5,"external_regressors.tsv"),
    mixing_file=op.join(dset_dir5,"tedana_processed", "desc-ICA_mixing.tsv"),
    out_dir=dset_dir5_extern_out
    )
CPU times: user 2.49 s, sys: 590 ms, total: 3.08 s
Wall time: 3.07 s
INFO     tedana:tedana_workflow:608 Using output directory: /tmp/tmpzwzv0mi9/five-echo-dataset/tedana_external_regress_processed
INFO     tedana:tedana_workflow:627 Initializing and validating component selection tree
INFO     component_selector:__init__:333 Performing component selection with demo_external_regressors_motion_task_models
INFO     component_selector:__init__:334 Demonstration based on the minimal decision tree that uses partial F stats on a model with multiple external regressors divided by category and task regressors to bias towards keeping.
INFO     tedana:tedana_workflow:630 Loading input data: ['five-echo-dataset/p06.SBJ01_S09_Task11_e1.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e2.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e3.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e4.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e5.sm.nii.gz']
---------------------------------------------------------------------------
RegressError                              Traceback (most recent call last)
Cell In[14], line 1
----> 1 get_ipython().run_cell_magic('time', '', 'dset_dir5_extern_out = f"{dset_dir5}tedana_external_regress_processed"\nfiles = sorted(glob(op.join(dset_dir5, \'p06*.nii.gz\')))\ntes = [15.4, 29.7, 44.0, 58.3, 72.6]\ntedana_workflow(files, tes, \n    tree="demo_external_regressors_motion_task_models",\n    external_regressors=op.join(dset_dir5,"external_regressors.tsv"),\n    mixing_file=op.join(dset_dir5,"tedana_processed", "desc-ICA_mixing.tsv"),\n    out_dir=dset_dir5_extern_out\n    )\n')

File /opt/conda/lib/python3.13/site-packages/IPython/core/interactiveshell.py:2565, in InteractiveShell.run_cell_magic(self, magic_name, line, cell)
   2563 with self.builtin_trap:
   2564     args = (magic_arg_s, cell)
-> 2565     result = fn(*args, **kwargs)
   2567 # The code below prevents the output from being displayed
   2568 # when using magics with decorator @output_can_be_silenced
   2569 # when the last Python token in the expression is a ';'.
   2570 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):

File /opt/conda/lib/python3.13/site-packages/IPython/core/magics/execution.py:1452, in ExecutionMagics.time(self, line, cell, local_ns)
   1450 if interrupt_occured:
   1451     if exit_on_interrupt and captured_exception:
-> 1452         raise captured_exception
   1453     return
   1454 return out

File /opt/conda/lib/python3.13/site-packages/IPython/core/magics/execution.py:1421, in ExecutionMagics.time(self, line, cell, local_ns)
   1419     if expr_val is not None:
   1420         code_2 = self.shell.compile(expr_val, source, 'eval')
-> 1421         out = eval(code_2, glob, local_ns)
   1422 except KeyboardInterrupt as e:
   1423     captured_exception = e

File <timed exec>:4

File /opt/conda/lib/python3.13/site-packages/tedana/workflows/tedana.py:641, in tedana_workflow(data, tes, out_dir, mask, convention, prefix, dummy_scans, masktype, fittype, combmode, n_independent_echos, tree, external_regressors, ica_method, n_robust_runs, tedpca, fixed_seed, maxit, maxrestart, tedort, gscontrol, no_reports, png_cmap, verbose, low_mem, debug, quiet, overwrite, t2smap, mixing_file, tedana_command)
    633 # Load external regressors if provided
    634 # Decided to do the validation here so that, if there are issues, an error
    635 #  will be raised before PCA/ICA
    636 if (
    637     "external_regressor_config" in set(selector.tree.keys())
    638     and selector.tree["external_regressor_config"] is not None
    639 ):
    640     external_regressors, selector.tree["external_regressor_config"] = (
--> 641         metrics.external.load_validate_external_regressors(
    642             external_regressors=external_regressors,
    643             external_regressor_config=selector.tree["external_regressor_config"],
    644             n_vols=data_cat.shape[2],
    645             dummy_scans=dummy_scans,
    646         )
    647     )
    649 io_generator = io.OutputGenerator(
    650     ref_img,
    651     convention=convention,
   (...)    656     verbose=verbose,
    657 )
    659 # Record inputs to OutputGenerator
    660 # TODO: turn this into an IOManager since this isn't really output

File /opt/conda/lib/python3.13/site-packages/tedana/metrics/external.py:60, in load_validate_external_regressors(external_regressors, external_regressor_config, n_vols, dummy_scans)
     57 except FileNotFoundError:
     58     raise ValueError(f"Cannot load tsv file with external regressors: {external_regressors}")
---> 60 external_regressor_config = validate_extern_regress(
     61     external_regressors=external_regressors,
     62     external_regressor_config=external_regressor_config,
     63     n_vols=n_vols,
     64     dummy_scans=dummy_scans,
     65 )
     67 return external_regressors, external_regressor_config

File /opt/conda/lib/python3.13/site-packages/tedana/metrics/external.py:238, in validate_extern_regress(external_regressors, external_regressor_config, n_vols, dummy_scans)
    231         err_msg += (
    232             f"External regressors have {len(external_regressors.index)} timepoints "
    233             f"while fMRI data have {n_vols} timepoints, of which {dummy_scans} are dummy "
    234             "scans.\n"
    235         )
    237 if err_msg:
--> 238     raise RegressError(err_msg)
    240 return external_regressor_config

RegressError: External regressors have 0 timepoints while fMRI data have 160 timepoints, of which 0 are dummy scans.
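The `RegressError` above arises when the external regressors TSV does not have one row per fMRI volume (minus dummy scans). A quick pre-flight check before calling `tedana_workflow` catches this early; the sketch below uses a simulated in-memory TSV, and the volume counts mirror this dataset's 160 timepoints (file contents and column names are illustrative, not from the actual run):

```python
import io

import pandas as pd

# tedana expects len(regressors) == n_vols - dummy_scans.
# Values below match the error message above; the TSV is simulated.
n_vols = 160
dummy_scans = 0

# Stand-in for a motion-parameter TSV with one row per volume
tsv_text = "trans_x\ttrans_y\n" + "\n".join("0.0\t0.0" for _ in range(n_vols))
regressors = pd.read_csv(io.StringIO(tsv_text), sep="\t")

expected = n_vols - dummy_scans
if len(regressors) != expected:
    raise ValueError(
        f"External regressors have {len(regressors)} rows; expected {expected}"
    )
print(f"OK: {len(regressors)} regressor rows match {expected} volumes")
```

Running this check on the real TSV (e.g. one produced by your motion-correction step) before the workflow avoids waiting for tedana to fail partway through.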

Components of the tedana report of 5 echo data with external regressors#

# Path to the second tedana report (the run with external regressors)
url = str(os.path.abspath(dset_dir5_extern_out + '/tedana_report.html'))
print(url)
/home/jovyan/Git_repositories/example-notebooks/books/functional_imaging/five-echo-dataset/tedana_external_regress_processed/tedana_report.html

To view this report with all figures displayed properly, right-click the file path printed above and choose Open in New Browser Tab.
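When running this notebook locally (rather than on a remote JupyterHub), you can also open the report programmatically with the `webbrowser` module imported at the top of the notebook. A minimal sketch, assuming the report path from the cell above; the `file_url` name is introduced here for illustration:

```python
import os
import webbrowser

# Assumed report location from the cell above
report_path = os.path.abspath(
    'five-echo-dataset/tedana_external_regress_processed/tedana_report.html'
)
file_url = 'file://' + report_path

# Only attempt to open when the report actually exists;
# on a remote server, fall back to the right-click method.
if os.path.exists(report_path):
    webbrowser.open(file_url)
print(file_url)
```

In a remote session the browser call is a no-op on the server side, which is why the right-click method above remains the reliable option.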

Carpet plot#

from IPython.display import SVG, display

SVG(filename='five-echo-dataset/tedana_external_regress_processed/figures/carpet_optcom.svg')
_images/a3e1f9ca9ccf9e6f9a647d41bcdcba2e26e599a9a16dd7db16581e9e33fcb330.svg

Adaptive Mask#

SVG(filename='five-echo-dataset/tedana_external_regress_processed/figures/adaptive_mask.svg')
_images/f5feb01dac820c2741064223785d215e291e3ae9455c23255f56c34f2baf988b.svg

T2*#

t2star_brain = SVG(filename='five-echo-dataset/tedana_external_regress_processed/figures/t2star_brain.svg')
t2star_histogram = SVG(filename='five-echo-dataset/tedana_external_regress_processed/figures/t2star_histogram.svg')

display(t2star_brain, t2star_histogram)
_images/2f0cac669c77b4748a64d7b9084893510f9d3d6089862c25bc9f0817b8d97f20.svg _images/3ff125ab89038ad30f060bad9f0f37b63af59bf89db3e987f7864efd87d9b77d.svg

S0#

s0_brain = SVG(filename='five-echo-dataset/tedana_external_regress_processed/figures/s0_brain.svg')
s0_histogram = SVG(filename='five-echo-dataset/tedana_external_regress_processed/figures/s0_histogram.svg')

display(s0_brain, s0_histogram)
_images/414fcbd9cfa51902fc2f5357e0a536c4292d90477bc09404bd1dec3d628d4fd9.svg _images/eb768fdcbeeb020a56317541f0ce7657ee04b31058ebf57e15c6b529eb519126.svg

T2* and S0 model fit (RMSE), scaled between the 2nd and 98th percentiles#

rmse_brain = SVG(filename='five-echo-dataset/tedana_external_regress_processed/figures/rmse_brain.svg')
rmse_timeseries = SVG(filename='five-echo-dataset/tedana_external_regress_processed/figures/rmse_timeseries.svg')

display(rmse_brain, rmse_timeseries)
_images/9b54ec11210c2909e606ab0c7c445b3f6c8bc257fb14fcb40245db950eb71d74.svg _images/b06d30398f5d158cb9fc48111849d1108a072ced6ce38ff087547cf63997b2de.svg

External regressors#

SVG(filename='five-echo-dataset/tedana_external_regress_processed/figures/confound_correlations.svg')
_images/e0c3a92b630a91d066675a39c14c6b302b5bd0df5036cd3e8bb4110bbde8f84b.svg

Dependencies in Jupyter/Python#

  • Using the watermark package to document the system environment and software versions used in this notebook

%load_ext watermark

%watermark
%watermark --iversions
Last updated: 2025-11-04T00:39:51.384745+00:00

Python implementation: CPython
Python version       : 3.11.6
IPython version      : 8.16.1

Compiler    : GCC 12.3.0
OS          : Linux
Release     : 5.4.0-204-generic
Machine     : x86_64
Processor   : x86_64
CPU cores   : 32
Architecture: 64bit

tedana : 25.0.1
IPython: 8.16.1