This repository describes an explicit pipeline for a neuroimaging data workflow involving T1 MRI, CT, and iEEG (ECoG or SEEG) data. For ECoG data, we do not have an explicit process outlined, but these cases are significantly easier since grids can be easily interpolated. See the `FieldTrip Toolbox`_.
To start using the workflows with your data, see the [workflow documentation](workflow/documentation.md) file.
- For a detailed description of the overall SEEK workflow, see the workflow documentation.
- For a detailed description of the SEEK contact localization workflow, specifically localizing the 2 points per electrode, see the :doc:`localization guide <./localization_guide>`.
For a description of the visualization engine, see: https://github.com/cronelab/ReconstructionVisualizer
See the INSTALLATION GUIDE for full instructions. SEEK uses the Snakemake workflow management system to create the different workflows. We chose Snakemake because it makes it easy to run individual workflows, as well as an entire workflow, from the command line. The full repository is set up similarly to the cookiecutter Snakemake workflow template: cookiecutter gh:snakemake-workflows/cookiecutter-snakemake-workflow.
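For example, a typical Snakemake invocation from the repository root might look like the following (a minimal sketch; the rule name is a placeholder, and this assumes the Snakefile is discoverable from the current directory):

.. code-block:: bash

    # preview which jobs would run, without executing anything (dry run)
    snakemake -n

    # run a single rule (placeholder name) on one core
    snakemake --cores 1 <rule_name>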
The recommended installation is via Docker. Instructions for running workflows in the container are shown below:
The Docker containers are hosted on Docker Hub at https://hub.docker.com/r/neuroseek/seek.
Setup: Note that the Docker container names are:

- seek_reconstruction
- seek_localization  # tbd
- seek_visualization  # tbd
To set up the container on your system:

.. code-block:: bash

    docker-compose up --build
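Once the services are built and up, a quick sanity check (using standard Docker commands, not anything specific to SEEK) is to confirm the containers are running:

.. code-block:: bash

    # list running containers; the seek_* services should appear here
    docker ps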
Running workflows within the container: In another terminal, one can run the pipeline commands.

.. code-block:: bash

    # run container and mount data directories
    docker run -v $PWD/Data:/data -it -e bids_root=/data -e derivatives_output_dir=/data/derivatives --rm neuroimg_pipeline_reconstruction bash
Running workflows **using** the container:

.. code-block:: bash

    # run snakemake using the containers
    snakemake <rule_name> --use-singularity
For running individual pipelines, see the INSTALLATION GUIDE.
If one wants a persistent data volume that reflects changes made by the Docker container running the Snakemake workflows, one can simply create a data/ directory inside this repository and add sourcedata/ to it. This directory serves as the BIDS root of the workflows.
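A minimal sketch of creating that layout on the host and mounting it, reusing the mount pattern and image name from the docker run command shown above:

.. code-block:: bash

    # create the BIDS root and the sourcedata folder inside this repository
    mkdir -p data/sourcedata

    # mount it into the container so results persist on the host
    docker run -v $PWD/data:/data -it -e bids_root=/data --rm neuroimg_pipeline_reconstruction bash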
We use BIDS. See https://github.com/bids-standard/bids-starter-kit/wiki/The-BIDS-folder-hierarchy for the BIDS folder hierarchy.
Before data is converted to BIDS by the seek/pipeline/01-prep pipeline, sourcedata/ should contain a semi-structured format of the neuroimaging data that will be put through the workflow.
sourcedata/
    {subject}/
        - premri/*.dcm
        - posmri/*.dcm
        - postct/*.dcm
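For example, populating sourcedata/ for a single subject might look like the following (the subject label and source DICOM paths are placeholders):

.. code-block:: bash

    # hypothetical subject label "subject01"; create the expected folders
    mkdir -p sourcedata/subject01/premri sourcedata/subject01/posmri sourcedata/subject01/postct

    # copy each DICOM series into place (source paths are placeholders)
    cp /path/to/preop_mri/*.dcm sourcedata/subject01/premri/
    cp /path/to/postop_mri/*.dcm sourcedata/subject01/posmri/
    cp /path/to/postop_ct/*.dcm sourcedata/subject01/postct/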
SEEK was created and is maintained by Adam Li. It is also maintained and contributed to by Christopher Coogan and other researchers in the NCSL and Crone lab. Contributions are more than welcome, so feel free to contact me, open an issue, or submit a pull request! See the :doc:`contribution guide <./doc/contributing>`.
To report a bug, please visit the GitHub repository.
Note that this program is provided with NO WARRANTY OF ANY KIND. If possible, always double-check the results with a human researcher or clinician.
If you want to cite SEEK, please use the Zenodo record for the repository.
Several functions of SEEK make use of existing software packages for neuroimaging analysis, including:
- For incorporation of DTI data, see ndmeg.