Healthy Volunteer Protocol Upload Process
=HV BIDS processing requires=
pyctf
General utilities for interfacing with CTF data using Python; also provides BIDS processing utilities. https://megcore.nih.gov/index.php/Pyctf (a more recent update will be available soon)
hv_proc
Python scripts to extract and mark HV-specific stimuli and to validate trigger/response timing and data QA. (Open access release in development.)
NIH MEG BIDS processing
Routines to convert the CTF MEG data into BIDS format using mne_bids and bids_validator. https://github.com/nih-fmrif/meg_bids/blob/master/1_mne_bids_extractor.ipynb
mne_bids
https://mne.tools/mne-bids/stable/index.html
 pip install -U mne
 pip install -U mne-bids
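As a minimal sketch of the mne_bids conversion step (the dataset name, subject ID, and output root below are hypothetical, not part of this protocol), a single CTF recording can be written into a BIDS tree like this:

 # Sketch: convert one CTF recording to BIDS with mne_bids.
 # 'sub01_airpuff.ds', subject '01', and 'bids_staging' are assumed names.
 import mne
 from mne_bids import BIDSPath, write_raw_bids
 
 raw = mne.io.read_raw_ctf('sub01_airpuff.ds')
 bids_path = BIDSPath(subject='01', task='airpuff', root='bids_staging')
 write_raw_bids(raw, bids_path=bids_path, overwrite=True)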
AFNI
Required for extracting HPI coil locations. https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/background_install/install_instructs/index.html
==BIDS Validator==
https://github.com/bids-standard/bids-validator
==BIDS format / OpenNeuro==
All data will be converted to BIDS format and uploaded to OpenNeuro as an open-access dataset. Data triggers are cleaned using several routines listed below; these have been used to realign stimulus triggers to the optical onset of the projector (see the sketch below). Datasets that have logfiles have been merged with the trigger data to label triggers and responses.
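A minimal sketch of the optical realignment idea, assuming a numpy array of trigger samples, a photodiode trace such as UADC016, and a hypothetical threshold and search window; the actual hv_proc routines may differ:

 # Sketch: shift each trigger onset to the first photodiode threshold crossing
 # that follows it. The threshold and window values are assumptions.
 import numpy as np
 
 def realign_to_optical(trig_samples, optical, fs, thresh=0.5, window=0.2):
     """Return trigger samples moved to the optical onset within `window` s."""
     w = int(window * fs)
     realigned = []
     for s in trig_samples:
         seg = optical[s:s + w]
         crossings = np.flatnonzero(seg > thresh)
         # keep the original sample if no crossing falls inside the window
         realigned.append(s + crossings[0] if crossings.size else s)
     return np.array(realigned)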
==Processing on Biowulf==
To run the processing scripts on Biowulf, pyctf and hv_proc must be on your conda path (if necessary, add the file paths to a .pth file in the conda site-packages folder; see the sketch below), and the following modules must be loaded:
 module load ctf
 module load afni
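A small sketch of the .pth approach; the file name and source paths below are hypothetical:

 # Sketch: find the active environment's site-packages directory, then drop a
 # .pth file there listing the pyctf and hv_proc checkouts (paths assumed).
 import site
 print(site.getsitepackages())
 # Create e.g. <site-packages>/local_meg.pth containing one path per line:
 #   /data/$USER/src/pyctf
 #   /data/$USER/src/hv_proc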
==Processing Steps==
Acquire data
Copy raw data to Biowulf: $MEG_folder/NIMH_HV/MEG
Copy data to bids_staging: $MEG_folder/NIMH_HV/bids_staging
 sinteractive --mem=8G --cpus-per-task=4  # start a small interactive session
 conda activate hv_proc
 module load afni
 module load ctf  # required for addmarks
 process_hv_data -subject_folder $Subject_Folder  # loops over all task processing steps and creates QA documents
==Process_hv_data.py==
Loop over the following scripts:

CONVERT MRI FIDUCIALS TO TAGS >> get code name
 pyctf.bids.extract_tags $subjid_anat+orig.BRIK > tagfile
Process tasks and assert outputs match expected values (see the QA sketch below):
 airpuff_processing.py
 oddball_processing.py
 hariri_processing.py
 sternberg_processing.py
 gonogo_processing.py
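A minimal sketch of the kind of output assertion this step performs, using the oddball epoch counts documented under Trigger Coding below; the mark data structure is an assumption:

 # Sketch: check that extracted mark counts match the documented oddball design
 # (210 standard / 45 target / 45 distractor). The `marks` layout is assumed.
 EXPECTED = {'standard': 210, 'target': 45, 'distractor': 45}
 
 def assert_counts(marks, expected=EXPECTED):
     """marks: dict mapping mark name -> list of onset times."""
     for name, n in expected.items():
         found = len(marks.get(name, []))
         assert found == n, f'{name}: expected {n} marks, found {found}'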
Calculate the noise level
Scrub path info and history text files from the datasets to remove dates and identifiers
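As an illustration only, a per-channel RMS estimate is one simple way to quantify a noise level; the actual hv_proc routine may compute something different:

 # Sketch: per-channel RMS over a channels x samples array as a rough
 # noise-level metric. This is an assumed stand-in, not the hv_proc routine.
 import numpy as np
 
 def rms_noise(data):
     """data: (n_channels, n_samples) array; returns per-channel RMS."""
     return np.sqrt(np.mean(data ** 2, axis=1))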
==Trigger Coding==
===Oddball Task===
Description: Three sound stimuli are presented to the participant. The participant attends to the "standard" tone stimulus (210 epochs) and is required to respond to the "target" tone stimulus (45 epochs), which is intermixed with the standard tones. Additionally, a broadband noise stimulus is presented as a "distractor" (45 epochs).
 UADC003 - Left ear auditory stimuli
 UADC004 - Right ear auditory stimuli
 UADC005 - Participant response
 UPPT001 - Stimulus coding (standard: 1, target: 2, distractor: 3)
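A minimal sketch of reading this coding off the parallel-port channel, assuming UPPT001 is available as an integer numpy vector (one value per sample):

 # Sketch: label rising edges on UPPT001 using the oddball code table above.
 # The trigger vector `uppt001` is an assumed input format.
 import numpy as np
 
 CODES = {1: 'standard', 2: 'target', 3: 'distractor'}
 
 def oddball_events(uppt001):
     """Return (sample_index, label) pairs at each rising edge."""
     vals = np.asarray(uppt001, dtype=int)
     edges = np.flatnonzero((vals[1:] != 0) & (vals[:-1] == 0)) + 1
     return [(int(s), CODES.get(vals[s], 'unknown')) for s in edges]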
===Hariri Hammer===
Emotional processing task contrasting happy and sad faces, with shapes used as a neutral baseline. An initial "top stim" is shown, followed by a fixation crosshair. During the "choice stim" the subject responds by pressing the left or right response button corresponding to the face that matches the "top stim" presentation.
 UADC006 - Left response
 UADC007 - Right response
 UADC016 - Projector onset
 UPPT001 - Parallel port stimuli
Temporal Coding (UPPT001):
 Top Stim
 Choice Stim
 Response Value
Stimulus trigger codes (UPPT001):

{| class="wikitable"
! Stimulus !! Trigger code
|-
| diamond || 0x1
|-
| moon || 0x2
|-
| oval || 0x3
|-
| plus || 0x4
|-
| rectangle || 0x5
|-
| trapezoid || 0x6
|-
| triangle || 0x7
|-
| hapmale || 0xB
|-
| hapfem || 0xC
|-
| sadmale || 0x15
|-
| sadfem || 0x16
|}
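For scripting against these codes, the table translates directly into a lookup such as the following sketch (the dictionary name is hypothetical):

 # Sketch: Hariri Hammer UPPT001 code lookup, transcribed from the table above.
 HARIRI_CODES = {
     0x1: 'diamond', 0x2: 'moon', 0x3: 'oval', 0x4: 'plus',
     0x5: 'rectangle', 0x6: 'trapezoid', 0x7: 'triangle',
     0xB: 'hapmale', 0xC: 'hapfem', 0x15: 'sadmale', 0x16: 'sadfem',
 }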
Processing Scripts:
The processing files have been compiled into the hv_proc folder, which contains script interfaces into the megblocks package.