Tuesday October 13 2020, 9:00-10:00 AM EDT
Presentations in this session:
- University of Campinas, Brazil: Towards fNIRS reproducibility at the intra- and inter-subject levels
- Yale School of Medicine, USA: Comparison of short-channel separation and spatial domain filtering for removal of systemic components in fNIRS
- Virginia Tech, USA: Temporal network dynamics in the prefrontal cortex during concept generation for engineering design
- University of Zurich, Switzerland: Hyperscanning and systemic physiology
- Holon Institute of Technology, Israel: Artifact detection based on statistical properties
- Holon Institute of Technology, Israel: Fuzzy sets based analysis of multimodal EEG-fNIRS images
- New Jersey Institute of Technology, USA: Cortical mechanisms of auditory masking
- TuP1 Panel Discussion: Data Analysis and Algorithms. Moderators: Sabrina Brigadoi & Adam Liebert
How does your 3-day variability in fNIRS connectivity compare with fMRI?
For functional connectivity, fNIRS variability is definitely higher at the inter- and intra-subject levels. But I suspect this happens because we used ROIs for fMRI analysis, and the averaging process with fMRI voxels may smooth out some of the random variability…
Thanks for the presentation. Important topic. Concerning the differences in brain activity with respect to time of day, did you see a specific pattern? Are, for example, the magnitudes of the responses lower in the morning?
Yes! Not only are the magnitudes lower, but the spatial extent of the response is also smaller in the morning. However, we found more variability in measurements taken at the same time on different days than at different times on the same day, which was kind of surprising to me because of the circadian cycle…
Did you observe the same PFC network models in deoxyhemoglobin?
How long were the recordings, and therefore each decile (10% data length)?
1. What exactly is the task that participants have to perform?
2. How do you control for the different phases of the design process?
Great work. The group data is reproducible.
Did you happen to perform a comparison between subjects?
Also, when comparing averaged data from groups, did you happen to find some “not good” channels/subjects and remove them from further analysis?
Great questions. We did compare between subjects and the spatial information does decrease the variability between subjects as well. You can check the effect on https://www.frontiersin.org/articles/10.3389/fnins.2020.00746/full
Comparing two or more groups is a trickier question, and we are still working on it. But yes, we check every channel of every subject and discard the bad channels (or subjects) when needed.
Nice talk. Could you please elaborate on how you choose the threshold level for the undirected graph generated from the FC of temporal fNIRS data?
thanks for the talk :)
How do you explain reproducibility of results at the group level, but not at the subject level? (Assuming the same spatial procedures are used at the group and the subject level.)
This is probably related to the root cause of the main variability in the data. When the errors are random and have no systematic trend, they tend to average out across different subjects.
Great talk. When you checked the repeatability of the fNIRS signal at the group level, how many subjects were recruited? Thank you.
It depended on the experiment. For motor tasks, we went as low as 5 subjects, to explicitly show that one can achieve high group-level reproducibility even with few subjects, as long as you account for confounders such as systemic physiology, motion artifacts, and extra-cortical contributions, and use a combined HbO/HbR analysis. If you do all these pre-processing steps, you will have good reproducibility at the group level.
For cognitive tasks such as reading, the changes are smaller and more heterogeneous by nature, and you may need more subjects. In our data we used ~25 subjects, and we have not investigated the minimum number of subjects needed for good reproducibility.
can you describe with more detail the spatial filter approach? Is this just global signal regression?
Very useful technique! Are PC1s common across subjects? Do you have to optimize it within subject?
Nice talk! I am curious about how differences in probe density across the head impact network metrics. I suppose a region with more closely spaced channels would tend to have channels with more connections. Is that correct?
How is the threshold value found? Manually or automatically?
Right now it is done manually. We will test it on other data sources.
I see. Maybe it’s possible to use the statistical properties of the data in the selected reference time frame to determine the threshold automatically.
We do it on a subset of the data and apply it to the rest of the subjects. But we only have one database right now, so the statistics of the data are similar.
We are comparing the statistical properties in the reference section to the rest of the sections, but at the moment it is hard to say if the ratio is constant across different datasets. Within our dataset, the threshold is constant for all records, thus serving as a predefined value for detection. We are planning to test it on other datasets to see if it is consistent.
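For readers following the thresholding discussion, here is a minimal sketch of turning a functional-connectivity matrix into an undirected graph by thresholding (Python/NumPy; the Pearson-correlation FC and the threshold value are illustrative assumptions, not the presenter's actual pipeline, where the threshold is tuned manually on a subset of the data):

```python
import numpy as np

def fc_to_graph(timeseries, threshold):
    """Build an undirected graph (binary adjacency matrix) from
    functional connectivity by thresholding channel-wise Pearson
    correlations. In the discussion above, the threshold is chosen
    manually on a subset of the data and applied to the rest."""
    fc = np.corrcoef(timeseries)        # (n_channels, n_channels)
    adj = np.abs(fc) >= threshold       # keep only strong connections
    np.fill_diagonal(adj, False)        # no self-loops
    return adj.astype(int)

# Toy example: 4 channels forming two strongly correlated pairs
rng = np.random.default_rng(0)
base = rng.standard_normal((2, 200))
data = np.vstack([base[0], base[0] + 0.1 * rng.standard_normal(200),
                  base[1], base[1] + 0.1 * rng.standard_normal(200)])
adj = fc_to_graph(data, threshold=0.8)
print(adj)
```

Because correlation is symmetric, the resulting adjacency matrix is symmetric and the graph is undirected by construction.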
Can you give us the link to a publication of your method? It is very interesting
We are preparing a manuscript; it will be submitted soon.
Please send me your email: firstname.lastname@example.org, and I can send it when accepted.
Thank you for your talk!
Rejecting the skewed outliers seemed simple but effective. Would this still work for the ‘step/baseline change’ type of artifact? The signal distribution would look bimodal. Thank you!
We do the statistics on short windows, so the steps will be “outliers”. Hope I understood your question correctly.
Thank you! It makes sense that it would depend on the window size.
You are correct. It is important to choose the correct window size for the sample-variance step, but in general it is enough to ensure that multiple instances of the periodic process fall within the window.
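The short-window statistics idea can be sketched as follows (a hypothetical illustration, not the presenters' exact method): compute the sample variance in consecutive short windows and flag windows whose variance is a robust outlier. A baseline step then inflates the variance of the window containing the transition, so that window is caught as an "outlier", consistent with the answer above.

```python
import numpy as np

def flag_artifact_windows(signal, win=50, z_thresh=3.0):
    """Flag short windows whose sample variance is a robust outlier
    among all windows (hypothetical sketch). A step/baseline change
    inflates the variance of the window containing the transition,
    so that window is detected as an 'outlier'."""
    n = len(signal) // win
    wins = signal[: n * win].reshape(n, win)
    var = wins.var(axis=1)
    med = np.median(var)
    mad = np.median(np.abs(var - med)) + 1e-12  # robust spread
    z = (var - med) / (1.4826 * mad)            # robust z-score
    return np.where(z > z_thresh)[0]            # indices of flagged windows

# Toy signal: low-amplitude noise with a baseline step inside window 4
rng = np.random.default_rng(1)
sig = 0.1 * rng.standard_normal(500)
sig[225:] += 2.0                                # step artifact
flagged = flag_artifact_windows(sig, win=50)
print(flagged)
```

Note that only the window containing the transition is flagged; the fully shifted windows after the step have ordinary variance, which is exactly the window-size dependence raised in the question.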
Wonderful talk, Antje! Can you say more about the direction of your effect? Why would poorer performance be linked to greater oxygenation? Is it compensation? Have you seen that direction of effect in other work?
Thanks, Lauren! Our hunch is that the poorer performers may need more resources to gate the signal from thalamus to cortex, and this may be what people describe as “listening effort.” We comment on this in the discussion of our work. https://www.biorxiv.org/content/10.1101/2020.08.21.261222v1.abstract
Great efforts! How do you optimize the system? Do you tune it to fNIRS or to EEG or to balance them?
Thanks, Pepe! Ideally it should be compensated. Currently we have 5 entries from the different EEG bands and only 2 from fNIRS (HbO2 and HbR), so right now there is likely some skew towards EEG. We are still testing different projections. We will see…
Thank you for your nice talk.
As shown in the slides, the biosignals were observed via wavelet coherence. Is the wavelet coherence used only for noise detection, or did it yield other findings (e.g., brain activations) in the time-frequency domain?
We just looked at the resting state, so there was no stimulation. We used the wavelet coherence to determine the coherence between the two subjects. Not sure if I answered your question adequately.
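As background on the coherence analysis: the presenters used wavelet coherence, which also resolves time; a simpler frequency-only analog is magnitude-squared coherence via `scipy.signal.coherence`. The sketch below (all signal parameters are made up for illustration, not taken from the study) shows two "subjects" sharing a common 0.1 Hz systemic oscillation, which shows up as a coherence peak at that frequency:

```python
import numpy as np
from scipy.signal import coherence

fs = 10.0                        # illustrative fNIRS sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)    # 5 minutes of "resting state"
rng = np.random.default_rng(2)

# Two "subjects" sharing a common 0.1 Hz systemic oscillation
# (Mayer-wave range), plus independent measurement noise.
common = np.sin(2 * np.pi * 0.1 * t)
subj_a = common + 0.5 * rng.standard_normal(t.size)
subj_b = common + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(subj_a, subj_b, fs=fs, nperseg=512)
peak_freq = f[np.argmax(Cxy)]
print(f"peak coherence {Cxy.max():.2f} at {peak_freq:.2f} Hz")
```

Wavelet coherence would additionally show whether this coupling is sustained or intermittent over the recording.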
Interesting idea. How do you account for the “fuzziness” of the images? Do you define a probability function for each channel?
In two different manners. First, brain activity is no longer assumed dichotomous (active vs. non-active) and is instead acknowledged to be fuzzy (a continuous degree/strength of activity). Second, by accepting that since EEG and fNIRS are different imaging modalities measuring related yet different things, they provide us with different points of view of the same construct (uncertainty arising from having different “raters” of the same phenomenon).
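The fuzzy view can be sketched as follows (a hypothetical illustration; the sigmoid membership function and its parameters are my assumptions, not the authors'): map a test statistic to a continuous degree of activation in [0, 1], then combine the degrees from the two modalities with a fuzzy operator.

```python
import math

def membership(t_stat, midpoint=2.0, slope=1.5):
    """Continuous degree of activation in [0, 1] derived from a test
    statistic, replacing the dichotomous active/non-active decision.
    (midpoint and slope are illustrative values, not the authors'.)"""
    return 1.0 / (1.0 + math.exp(-slope * (t_stat - midpoint)))

def fuzzy_and(a, b):
    """Combine the degrees from two 'raters' (e.g., EEG and fNIRS)
    with the minimum t-norm: strong joint evidence of activity only
    if both modalities agree."""
    return min(a, b)

eeg_degree = membership(3.5)    # strongly active according to EEG
fnirs_degree = membership(1.0)  # weakly active according to fNIRS
print(fuzzy_and(eeg_degree, fnirs_degree))
```

The minimum t-norm is the most conservative choice; other operators (e.g., the product) weight disagreement between the modalities differently.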
How do you explain the greater variability in fNIRS data compared to fMRI data?
It is probably related to both the size of the measurement unit and the nature of what is embedded in the data. The high temporal resolution of fNIRS makes it more sensitive to systemic oscillations, and we can see these changes; in fMRI they are “hidden” by the lower temporal resolution (they alias into the data but are not “visible”). The smaller “unit” of measurement in fNIRS is the single channel, with very little overlap between channels (at least in low-density probes). In fMRI there are several voxels, each carrying different information, so random noise is cancelled out when averaging. On top of all that, fNIRS data are sensitive to extra-cortical contributions and, as much as short-channel regression can remove these contributions, it is not 100% effective.
But keep in mind we always investigated intra-subject and inter-subject variability. Group results are quite robust with fNIRS.
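The voxel-averaging argument can be made concrete with a toy simulation: averaging N units with independent noise shrinks the noise standard deviation by roughly 1/sqrt(N) (a hypothetical illustration in Python/NumPy; the numbers are not from the study):

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_units = 10_000, 64

# Independent unit-variance noise in each "unit"
# (an fNIRS channel or an fMRI voxel)
noise = rng.standard_normal((n_obs, n_units))

single_unit_sd = noise[:, 0].std()          # one channel on its own
roi_average_sd = noise.mean(axis=1).std()   # ROI average over 64 voxels

print(f"single unit SD ~ {single_unit_sd:.3f}")
print(f"64-unit ROI SD ~ {roi_average_sd:.3f}")   # about 1/sqrt(64) = 0.125
```

This is why an fMRI ROI average can look much less variable than a single fNIRS channel even when the per-unit noise levels are comparable; correlated noise (e.g., systemic physiology) would not average out this way.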
Thanks for your talk; really exciting results shown on the last slide! I was wondering how the multipixel detector was used to make a tomographic image in this case. Does each fibre feed an individual pixel on the array, and if so, how does this translate to a 3D image?