2021 SfNIRS Virtual Meeting Q&A: Hardware special session

14 Comments

Frédéric Lange (@fredlange)
5 days ago

Hi everyone, just posting a few questions that were not addressed in the Q&A:

Hi, I would like to know what hardware development companies are doing to ensure diverse representation in the development of fNIRS technologies and techniques. I have found that the biggest barrier to quality data collection is the lack of consideration of various hair textures and hair styles in the design of fNIRS caps. So how are you ensuring that diverse perspectives are represented at the R&D stage, and what initiatives do you have ongoing for expanding the range of hair styles and textures that are accommodated in NIRS cap and optode design?

Cooper (@cooper)
Reply to  Frédéric Lange
4 days ago

I think this is a great question and one that should have got more attention. I can’t speak for any company any more, but I certainly agree there needs to be more effort on this, as there is starting to be in the EEG world.

The only way to make companies pay attention to this issue is for the community to develop a standard. We need to somehow build a standard set of hair phantoms that are reproducible, to allow meaningful comparison of different devices, and then pester the companies until they publish system performance in those phantoms.

However, while improvements can definitely be made, as we said in the session, I think the difficult fact is that there will always be hair types that effectively preclude optical methods.

Ryan Field (@ryanfield)
Reply to  Cooper
3 days ago

This is an interesting idea. Do you know of anyone producing hair phantoms that could be used as a standard, similar to MEDPHOT?

Cooper (@cooper)
Reply to  Ryan Field
3 days ago

No. I do remember some people attempting this in the past, but it is very hard to do in a reproducible way. Something akin to the MEDPHOT protocol is definitely what is needed.

You should hire a wigmaker at Kernel!

Frédéric Lange (@fredlange)
5 days ago

Not addressed in the Q&A:

How do you handle subjects who do not have MRI scans? Do you have to use some kind of brain atlas? What type of atlas do you prefer?

Cooper (@cooper)
Reply to  Frédéric Lange
4 days ago

Yes, generally we will use an atlas template. There are plenty of adult models to choose from. We tend to use models from MNI, some of which we have released as meshed versions at ucl.ac.uk/dot-hub.
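For anyone curious what the registration step looks like in practice, here is a minimal sketch of fiducial-based rigid alignment of digitized positions to an atlas head. It assumes a simple least-squares (Kabsch) fit; all coordinates are made-up placeholders, and this is not the specific pipeline of any group or vendor.

    # Minimal sketch: rigid (Kabsch) alignment of digitized subject positions to an
    # atlas head using three fiducials. All coordinates are placeholder numbers.
    import numpy as np

    def fit_rigid(src, dst):
        """Least-squares rotation R and translation t mapping src points onto dst."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against an accidental reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Digitized fiducials on the subject (nasion, left/right preauricular), in mm.
    subj_fid = np.array([[0.0, 95.0, -10.0],
                         [-75.0, 0.0, -30.0],
                         [75.0, 0.0, -30.0]])

    # Corresponding fiducials on the atlas head mesh (placeholder values).
    atlas_fid = np.array([[0.0, 85.0, -40.0],
                          [-80.0, -20.0, -50.0],
                          [80.0, -20.0, -50.0]])

    R, t = fit_rigid(subj_fid, atlas_fid)

    # Map every digitized optode into atlas space; a full pipeline would then
    # project these onto the atlas scalp mesh before modelling light transport.
    optodes_subj = np.array([[30.0, 60.0, 50.0],
                             [50.0, 40.0, 55.0]])
    optodes_atlas = optodes_subj @ R.T + t
    print(optodes_atlas)

The sketch only covers the alignment step; the scalp projection and the photon-transport model on the atlas mesh come after this.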

Ryan Field (@ryanfield)
Reply to  Frédéric Lange
3 days ago

We also use an atlas template for our 3D reconstructions.

Frédéric Lange (@fredlange)
5 days ago

Not addressed in the Q&A:

In terms of fNIRS/DCS (“optical techniques”) competing with fMRI, how much limitation comes simply from the inability of light to penetrate to the deeper regions of the brain?

CarpStefan (@carpstefan)
Reply to  Frédéric Lange
4 days ago

This was discussed live. Briefly, it depends on the application: certain applications that require probing the activity of deeper regions would be difficult, but a lot of neuroscience relies on cortical activity, which can be probed with optical methods.

Frédéric Lange (@fredlange)
5 days ago

Not addressed in the Q&A:

Were you able to acquire any fast optical signals, like those observed in EEG evoked responses in the millisecond range?

Cooper (@cooper)
Reply to  Frédéric Lange
4 days ago

Not sure who this is addressed to, but I expect the answer from all of us is ‘no’. There is a general consensus that the fast optical signal (FOS) is not readily observable in humans (at least not yet), and that includes with DCS. David Boas just published a paper on this question and concluded that the SNR one would need is not within reach any time soon.

Frédéric Lange (@fredlange)
5 days ago

Not addressed in the Q&A:

Do you think that at some point we could have sufficient depth resolution to separate activity between superficial and deep layers of the cortex?

Cooper (@cooper)
Reply to  Frédéric Lange
4 days ago

Not with a purely optical method, I don’t think. Even with time-gated TD methods, the sensitivity distributions overlap heavily in depth, so this is a very challenging thing to do. It might be possible with some of the theoretical multimodal approaches that use e.g. ultrasound to spatially tag detected light and minimize the effects of scatter, but those techniques haven’t been meaningfully demonstrated in humans.
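Just to put a crude number on “overlap heavily in depth”, here is a minimal sketch using a continuous-wave, semi-infinite diffusion model with typical textbook optical properties (nothing specific to any particular instrument) to estimate how little of the banana-shaped sensitivity lies deeper than 20 mm for common source-detector separations.

    # Rough sketch: CW diffusion model in a semi-infinite medium, estimating the
    # fraction of source-detector sensitivity ("banana") deeper than 20 mm.
    # Optical properties are typical literature values, not instrument parameters.
    import numpy as np

    mua, musp = 0.01, 1.0                    # absorption / reduced scattering [1/mm]
    D = 1.0 / (3.0 * (mua + musp))           # diffusion coefficient [mm]
    mueff = np.sqrt(mua / D)                 # effective attenuation [1/mm]
    z0 = 1.0 / musp                          # effective isotropic source depth [mm]

    def fluence(xs, zs, x, z):
        """CW fluence from a point source at (xs, zs); zero-boundary image-source approx."""
        r1 = np.maximum(np.sqrt((x - xs) ** 2 + (z - zs) ** 2), 0.5)  # clamp singularity
        r2 = np.maximum(np.sqrt((x - xs) ** 2 + (z + zs) ** 2), 0.5)
        return np.exp(-mueff * r1) / r1 - np.exp(-mueff * r2) / r2

    depths = np.linspace(1.0, 40.0, 80)      # depth below the surface [mm]
    lateral = np.linspace(-60.0, 60.0, 241)  # lateral positions [mm]
    X, Z = np.meshgrid(lateral, depths)

    for sep in (20.0, 30.0, 40.0):           # source-detector separations [mm]
        xs, xd = -sep / 2.0, sep / 2.0
        # Born/adjoint-style sensitivity: product of source and detector fluences
        # (the normalisation by the measured signal cancels in the fraction below).
        pmdf = fluence(xs, z0, X, Z) * fluence(xd, z0, X, Z)
        profile = pmdf.sum(axis=1)           # sum over lateral position -> depth profile
        frac_deep = profile[depths > 20.0].sum() / profile.sum()
        print(f"SD = {sep:.0f} mm: ~{100 * frac_deep:.1f}% of sensitivity lies deeper than 20 mm")

On this kind of simple model only a small percentage of the total sensitivity sits below about 2 cm even at large separations, which is the core of the depth-resolution problem.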

ReRebecca (@rerebecca)
Reply to  Cooper
4 days ago

I totally agree. Because of diffusion (i.e. scattering), I don’t think we will be able to go deeper than about 3 cm in tissues like the head, where we also have the scalp to get through.