https://megcore.nih.gov/index.php?title=Special:NewPages&feed=atom&hidebots=1&hideredirs=1&limit=50&offset=&namespace=0&username=&tagfilter=&size-mode=max&size=0MEG Core - New pages [en]2024-03-29T05:11:38ZFrom MEG CoreMediaWiki 1.41.0https://megcore.nih.gov/index.php/Temporary_ScheduleTemporary Schedule2024-02-29T19:46:28Z<p>Amnamyst: </p>
<hr />
<div>The MEG Calendar is unfortunately still under reconstruction, with a resolution date TBD. Please see below for how to view the calendar and request MEG lab time.<br />
<br />
== Outlook Calendar Link -- View Only==<br />
<br />
You can preview the current lab availability here. If you have any changes to your scanning schedule, please let us know ASAP.<br />
<br />
[https://outlook.office365.com/calendar/published/227b024e9d674fe2bdeac99acbc06fd2@nih.gov/409902d1eff54d6aa68326871c9941e615254852603659514388/calendar.html Temporary MEG Lab Calendar]<br />
<br />
== Requesting Scan Time ==<br />
When submitting requests, use the following format:<br />
<br />
Subject: "PI/Researcher"<br />
Date: Thursday, March 28, 2024<br />
Start Time: 08:30<br />
End Time: 10:30 <br />
<br />
Please email to Anna Namyst and Amaia Benitez, and copy meglab@kurage.nimh.nih.gov on your email.<br />
<br />
<br />
'''''<span style="color:#009999"> THANK YOU FOR YOUR PATIENCE!! </span>'''''<br />
<br />
<!-- ==Current Week:==<br />
[[File:CurrentWeek.png | center | 1200x800px]]<br />
==Next Week:==<br />
[[File:NextWeek.png | center | 1200x800px]]<br />
==Week after Next==<br />
[[File:NextNextWeek.png | center | 1200x800px]]<br />
==Overview of Current Month:==<br />
'''''!! Monthly preview may be incomplete, as it is reconstructed only from the reservation confirmation emails.'''''<br />
[[File:March2024.png| center | 1000x600px]] --></div>Amnamysthttps://megcore.nih.gov/index.php/ENIGMA_MEG_Pipeline_FAQENIGMA MEG Pipeline FAQ2024-02-14T13:55:08Z<p>Amnamyst: /* Don't see your question? */</p>
<hr />
<div>''Under construction -- If there is a topic you would like added, reach out to Anna Namyst. :)''<br />
<br />
[https://github.com/jstout211/enigma_MEG ENIGMA MEG Pipeline on GitHub]<br />
<br />
The programs in this package perform the full processing pipeline for the ENIGMA BIDS working group. This suite requires that your data be organized in BIDS format.<br />
<br />
==Preparation==<br />
<br />
;Why BIDS?<br />
:Brain Imaging Data Structure (BIDS) provides a standard for organizing neuroimaging data.<br />
<br />
;My data isn't in BIDS -- how can I get it into that format?<br />
:If you need a tool for that, you can use enigma_anonymization_lite. The core tool is process_meg.py, which performs all processing steps for the anatomical MRI and the associated MEG. You can either process a single subject or loop over all subjects in batch mode. <br />
<br />
;How do I batch process my data?<br />
:To do batch processing, first run parse_bids.py to produce a .csv file manifest of all available MEG scans (or generate the .csv file by some other method). <br />
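In sketch form, batch mode amounts to looping over the rows of that manifest. The file name and column layout below are hypothetical (inspect the .csv that parse_bids.py actually produces for your data), and the echo stands in for the real process_meg.py call:<br />

```shell
# Hypothetical manifest standing in for the output of parse_bids.py
# (the real column layout may differ -- inspect your .csv first)
cat > meg_manifest.csv <<'EOF'
sub-01,ses-1,run-01,rest
sub-02,ses-1,run-01,rest
EOF

# Loop over every row; in a real run, the echo below would be a call
# to process_meg.py with the appropriate subject/session/run/task flags
while IFS=, read -r subject session run task; do
    echo "process ${subject} ${session} ${run} ${task}"
done < meg_manifest.csv
```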
<br />
;Does the pipeline include artifact correction?<br />
:Yes, two methods of artifact correction are supported. The first is ICA with manual identification of artifact components; this is likely the most accurate, but only practical if you have a very small dataset and plenty of time. The second is ICA with MEGnet automated classification of artifact components. MEGnet was retrained on data from CTF, Elekta/MEGIN, 4D, and KIT systems, and the model classifies components with >98% accuracy, so this is also an excellent option.<br />
<br />
;What about Quality Assurance?<br />
:Once all the processing is complete, you can generate QA images using prep_QA.py. Like process_meg.py, prep_QA.py operates either on a single subject or on all subjects listed in the .csv file produced by parse_bids.py. Once the .png files are created, you can use Run_enigma_QA_GUI.py to interactively label your subject images as good or bad.<br />
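Before labeling, it can help to confirm that every subject actually produced a QA image. The directory name qa_images/ and the &lt;subject&gt;_QA.png naming scheme below are assumptions for illustration, not the pipeline's documented output layout:<br />

```shell
# Assumed layout: one <subject>_QA.png per subject in qa_images/
mkdir -p qa_images
touch qa_images/sub-01_QA.png   # pretend sub-01 finished processing

# Report any subject whose QA image is missing
for sub in sub-01 sub-02; do
    if [ ! -f "qa_images/${sub}_QA.png" ]; then
        echo "missing QA image for ${sub}"
    fi
done
```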
<br />
;My Subject .csv file is not being read. Help!<br />
:Check your file for errors. <br />
::Does your path name have a leading space? <br />
::Did Excel delete the leading zero from your run numbers? <br />
::Does your path have any errors?<br />
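Two of the checks above can be automated with standard shell tools. The file name subjects.csv and the column layout below are hypothetical; adapt the patterns to your manifest:<br />

```shell
# Build a small example manifest containing two common mistakes:
# a leading space in the path field, and a run number missing its zero
printf 'sub-01, /data/meg/sub-01,01\nsub-02,/data/meg/sub-02,1\n' > subjects.csv

# Flag rows with a space right after a comma (leading space in a field)
grep -n ', ' subjects.csv || true

# Flag rows whose final (run) field is a single digit -- Excel may have
# stripped the leading zero
awk -F, 'length($NF) == 1 {print NR ": run number may be missing a leading zero"}' subjects.csv
```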
<br />
==Main Processing Pipeline==<br />
''Under construction''<br />
<br />
==Don't see your question?==<br />
;If you've identified a script problem<br />
:Open an issue on GitHub.<br />
;Otherwise<br />
:Email Anna Namyst and Jeff Stout for troubleshooting help.</div>Amnamysthttps://megcore.nih.gov/index.php/Biowulf_standard_processingBiowulf standard processing2024-02-09T18:04:10Z<p>Jstout: </p>
<hr />
<div>= Quick tutorial to generate most of the needed inputs for data analysis =<br />
<br />
== General Setup ==<br />
=== Mounting local computer to biowulf ===<br />
https://hpc.nih.gov/docs/hpcdrive.html<br />
== Configuring your bash shell environment ==<br />
If you are editing your .bashrc, open two terminals on Biowulf first. If you misconfigure your .bashrc, you will not be able to log into Biowulf; keeping a second terminal open lets you fix anything that errors out.<br />
=== Edit .bashrc file in your home drive ===<br />
umask 002 #Gives automatic group permissions to every file you create -- very helpful for working with your team<br />
<br />
#Add modules bin path to access the MEG modules<br />
PATH=/data/MEGmodules/bin:$PATH<br />
<br />
## Set up some aliases, so you don't have to type these out<br />
alias sinteractive_small='sinteractive --mem=8G --cpus-per-task=4 --gres=lscratch:30'<br />
alias sinteractive_medium='sinteractive --mem=16G --cpus-per-task=12 --gres=lscratch:100'<br />
alias sinteractive_large='sinteractive --mem=24G --cpus-per-task=32 --gres=lscratch:150'<br />
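To see what umask 002 buys you, compare the permissions on a freshly created file. The snippet below runs in a subshell so your current umask is left untouched (stat -c is the GNU/Linux form, which is what Biowulf provides):<br />

```shell
# With umask 002, new files get mode 666 & ~002 = 664 (group-writable);
# the Biowulf default of 022 would yield 644 (group read-only)
(
    umask 002
    touch umask_demo_file
    stat -c '%a' umask_demo_file   # prints 664 on Linux
    rm -f umask_demo_file
)
```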
<br />
=== Edit .bash_profile in your home drive ===<br />
#Set your default group (normally your default is your userID, which isn't helpful for your group)<br />
#Type `groups` to see which groups you are part of<br />
newgrp <<YOUR GROUP ID>><br />
<br />
=== To Access Additional MEG modules ===<br />
#Add the following line to your ${HOME}/.bashrc<br />
module use --append /data/MEGmodules/modulefiles<br />
<br />
== General Processing Pipeline == <br />
=== First write out the appropriate events in the markerfile ===<br />
TBD - fill in items here<br />
<br />
=== Convert Data to BIDS === <br />
module load mne<br />
make_meg_bids.py -h #Then fill in the required items<br />
<br />
== Processing the MRI related components == <br />
#Generate the Boundary Element Model / transform / forward model for source localization. Check the help; there is a volume vs. surface flag<br />
#This will write out the swarmfile (which can be edited) and print out the command to launch the swarm job<br />
megcore_prep_mri_bids.py -gen_swarmfile -bids_root <<BIDS_ROOT>> -swarm_fname <<SWARM_FNAME>> -subject <<SUBJECT>> -run <<RUN>> -session <<SESSION>> -task <<TASK>><br />
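The generated swarm file is plain text with one command per line (one line per job). The sketch below is illustrative only -- the subject IDs, paths, and resource values are hypothetical; check Biowulf's swarm documentation for the flags appropriate to your jobs:<br />

```shell
# A generated swarm file might look like (one command per line, one per job):
#
#   megcore_prep_mri_bids.py -bids_root /data/mybids -subject sub-01 -run 01 -session 1 -task rest
#   megcore_prep_mri_bids.py -bids_root /data/mybids -subject sub-02 -run 01 -session 1 -task rest
#
# Launch it on Biowulf with the swarm command; -g (GB of memory per process)
# and -t (threads per process) are illustrative values, not recommendations:
#
#   swarm -f <<SWARM_FNAME>> -g 8 -t 4 --module mne
```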
<br />
== Parsing Logfiles ==<br />
Presentation logfiles can be parsed with prespy:<br />
https://github.com/gjcooper/prespy</div>Jstout