<note>At this moment the data sharing has not been fully completed. We are currently working on a publication that describes the details of this dataset, and the data is under review. Once completed, the data will be published on the [[https://www.ru.nl/donders/research/data/|Donders Repository]] with DOI http://hdl.handle.net/11633/di.dccn.DSC_3011020.09_236.
</note>

===== Procedure =====
  
  
==== Step 2: collect and convert MRI data from DICOM to NIFTI ====

In this section we are using [[https://github.com/rordenlab/dcm2niix|dcm2niix]] not only to convert the DICOMs to NIfTI, but also to create the initial JSON sidecar files with the information about the MR scan parameters. In step 6 we will update the sidecar files with information that is not available in the DICOMs, such as the task instructions.
  
<code>
...
done
</code>
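To give an impression of what such a conversion could look like, here is a minimal sketch of a dcm2niix call that writes both the NIfTI file and the initial JSON sidecar. The subject list and the location of the DICOM files are made up for this sketch; only the $RAW and $BIDS variables correspond to the ones used in the scripts on this page.

<code>
# illustrative only: the DICOM directory layout and the subject list are assumptions
for SUB in V1001 V1002 ; do
  mkdir -p $BIDS/sub-$SUB/anat
  # -b y writes a JSON sidecar, -z n keeps the output uncompressed,
  # -f sets the output file name, -o the output directory
  dcm2niix -b y -z n -f sub-${SUB}_T1w -o $BIDS/sub-$SUB/anat $RAW/$SUB/dicom/T1
done
</code>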
==== Step 3: collect and rename the CTF MEG datasets ====

In this step we are copying and renaming the CTF datasets to the target location using a CTF command line utility. During this process, the identifying information about the subject (i.e., the name) is removed from the dataset. Since the "newDs -anon" option does not remove the time and date of the recording from the dataset, we do an additional step at the end to remove the date of acquisition from the res4 header file. We keep the time, as it is not unique enough to identify which recording goes with which participant. See also this [[/faq/how_can_i_anonymize_a_ctf_dataset|frequently asked question]].
  
<code>
...
newDs -anon $RAW/V1090/meg/V1090_301102009_02.ds  $BIDS/sub-V1090/meg/sub-V1090_task-visual_run-1_meg.ds
newDs -anon $RAW/V1090/meg/V1090_301102009_03.ds  $BIDS/sub-V1090/meg/sub-V1090_task-visual_run-2_meg.ds

####################################################################################################
# the anon option of newDs does not remove the date and/or time of the recording
####################################################################################################
find $BIDS -name \*.res4 -exec $HOME/bids-tools/bin/remove_ctf_datetime -d {} \;
</code>
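The regular datasets can be handled with a loop along the following lines. This is only a sketch: the raw CTF dataset names and the task label are made up for the example and do not correspond to the actual file names.

<code>
# illustrative only: raw CTF dataset names and task label are assumptions
for SUB in V1001 V1002 ; do
  mkdir -p $BIDS/sub-$SUB/meg
  newDs -anon $RAW/$SUB/meg/${SUB}_task.ds $BIDS/sub-$SUB/meg/sub-${SUB}_task-visual_meg.ds
done
</code>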
  
<note>You can see a few exceptions, which reflect datasets that did not convert well automatically. The reason is that during data acquisition the data ended up in two different *.ds datasets. According to BIDS, these are supposed to be represented as different 'runs'.
</note>
==== Step 4: collect the NBS Presentation log files ====

All Presentation log files are copied from their original location to the sourcedata folder. Although in step 6 the events in the log files will be used to construct the events.tsv files, we want to keep (and share) the Presentation log files, as they contain slightly more information than can be represented in the events.tsv files.

One issue is that the Presentation log files contain the exact date and time of the experiment. To avoid possible identification of participants, we are using [[https://www.gnu.org/software/sed/manual/sed.html|sed]] to replace the time and date in the files.
  
<code>
...
done
</code>
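A minimal sketch of such a replacement is shown below. The header line that is matched ("Logfile written - ..."), the placeholder date, and the location of the log files under sourcedata are assumptions about the log files; the actual script may use a different pattern.

<code>
# illustrative only: assumes the log file header contains a line like
#   Logfile written - 11/22/2013 14:03:17
for LOG in $BIDS/sourcedata/sub-*/beh/*.log ; do
  sed -i 's|Logfile written - .*|Logfile written - 01/01/1900 00:00:00|' $LOG
done
</code>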
==== Step 5: collect the MEG coregistered anatomical MRIs ====

The coregistration of the MEG recording with the anatomical MRI has been done on the basis of the head localizer coils (placed at the nasion and on two [[/faq/how_are_the_lpa_and_rpa_points_defined|ear molds]] on either side), the anatomical landmarks (nasion, LPA, RPA), and the scalp surface that was recorded with the Polhemus. This coregistration was done using **[[:reference:ft_volumerealign]]** and the resulting anatomical MRI was saved back to disk in NIfTI format.

Since the orientation of the CTF coregistered MRI is flipped relative to the NIfTI file that was generated by dcm2niix, we are sharing both. The native one is most convenient for processing the functional MRI and DWI data, whereas the one in CTF space is most convenient for processing the MEG data.

The CTF coregistered MRI gets the same JSON sidecar file as the one converted by dcm2niix, which will be updated in step 6 regarding the coordinate system.
  
<code>
...
done
</code>
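As an impression of what this step amounts to, the sketch below copies the coregistered anatomical MRI into the BIDS structure and gives it a copy of the sidecar that dcm2niix created. The location of the coregistered MRI and the acq-ctf label in the file name are assumptions for this example.

<code>
# illustrative only: source location and the acq-ctf label are assumptions
for SUB in V1001 V1002 ; do
  cp $RAW/$SUB/mri/${SUB}_coregistered.nii   $BIDS/sub-$SUB/anat/sub-${SUB}_acq-ctf_T1w.nii
  cp $BIDS/sub-$SUB/anat/sub-${SUB}_T1w.json $BIDS/sub-$SUB/anat/sub-${SUB}_acq-ctf_T1w.json
done
</code>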
==== Step 6: create the sidecar files for each dataset ====
  
<code>
  ...
  dataset = cat(1, anat(:), func(:), dwi(:), meg(:));
   ​   ​
  for j=1:numel(dataset)
  ...
            switch exceptions_meg(k).extra
              case 'fix_events'
                % this requires you to have the mous github repository on your path
                if contains(cfg.dataset, 'auditory')
                  event = mous_read_event_audio(cfg.dataset);
  ...
</code>
  
<note>This script deals with some dataset-specific exceptions. Since we are working with real data, one-size-fits-all automatic conversions are likely to fail occasionally.

In the current context, the tricky part happened to be the creation of the events.tsv files for the MEG task data. To create these files, data2bids attempts to align the experimental events extracted from the Presentation log file with the experimental events extracted from the digital trigger channel in the MEG data files. This only works well and unambiguously if there is a one-to-one mapping of the events (or of a specific type of event) between the two representations.

In the current example, there were occasional issues with the digital trigger channel, which precluded fully automatic processing of all files. The example script above is therefore the result of several iterations to deal with the exceptions.
</note>
==== Step 7: create the general sidecar files ====
  
Throughout the development of the scripts and after having completed the conversion I used the [[http://github.com/INCF/bids-validator/|bids-validator]] to check compliance with BIDS. During script development it revealed errors and inconsistencies, which I fixed in the scripts (which I then reran). After the final conversion there were still some warnings printed, but the dataset passed the validator.
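If you want to run the validator yourself, a minimal command line session could look like this (assuming Node.js and npm are available; the validator can also be run in the browser).

<code>
# install the command line version of the validator once, then run it on the BIDS folder
npm install -g bids-validator
bids-validator $BIDS
</code>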
===== Issues =====

Although the scripts are presented in a linear fashion, the actual conversion of the whole dataset took some effort, especially in dealing with unexpected features or with exceptions in a few subjects. This section describes some of the issues that we encountered.

Due to CTF hardware problems, the task MEG data of some subjects was not recorded in a single CTF dataset, but in two datasets. We dealt with this by copying them explicitly (not in the for-loop) in step 3.

Due to a misconfiguration of the Bitsi box ("level mode"), the trigger codes in the task MEG data of some subjects are represented incorrectly. The consequence is that the individual bits of the triggers overlap in time, causing the default trigger detection to fail. This is dealt with in step 6 by using the mous_read_event_audio function from the MOUS github repository.

In some of the MEG recordings the default settings for event detection from the digital trigger channel left a limited number of events undetected, causing occasional failure of the alignment procedure based on shared events. This was mostly caused by two events being spaced too closely in time, whether or not in combination with a too wide trigger pulse, resulting in "staircase-shaped" pulses. In case of such a mismatch between the number of events extracted from the trigger channel and the number extracted from the Presentation log file, we defined another shared event for the alignment. This is dealt with in step 6.

The Presentation log files for the visual stimuli had an \<enter\> after the period (.) at the end of each sentence. This caused the line in the log file to be broken in two, resulting in incorrect parsing of the log file in step 6. We dealt with this by removing the \<enter\> from the log files prior to step 6, i.e. in step 4.
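One possible way to repair such broken lines is sketched below. It assumes that only the broken lines end in a period and that they should be rejoined with a single space, which should be checked before applying this to other log files; the log file location is again just an example.

<code>
# illustrative only: if a line ends in a period, join it with the following line,
# so that every log entry is on a single line again
for LOG in $BIDS/sourcedata/sub-*/beh/*.log ; do
  sed -i -e :a -e '/\.$/N; s/\.\n/. /; ta' $LOG
done
</code>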
  
===== References =====
  * [[https://doi.org/10.1016/j.neuroimage.2016.03.007|Neural activity during sentence processing as reflected in theta, alpha, beta, and gamma oscillations.]] //Lam NHL, Schoffelen JM, Uddén J, Hultén A, Hagoort P.// Neuroimage. 2016 Nov 15;142:43-54. doi: 10.1016/j.neuroimage.2016.03.007.
  * [[https://doi.org/10.1080/23273798.2018.1437456|Robust neuronal oscillatory entrainment to speech displays individual variation in lateralisation.]] //Lam NHL, Hultén A, Hagoort P, Schoffelen JM.// Language, Cognition and Neuroscience. 2018;33(8):943-954. doi: 10.1080/23273798.2018.1437456.