First, you need to install the CaImAn environment: https://github.com/flatironinstitute/CaImAn
In brief, to get CaImAn installed, type the following commands in your Anaconda (or Miniconda) prompt:
Install mamba in the base environment: conda install -n base -c conda-forge mamba
Install caiman (enter the desired environment name instead of <NEW_ENV_NAME>): mamba create -n <NEW_ENV_NAME> -c conda-forge caiman
Activate the virtual environment: conda activate <NEW_ENV_NAME>
Install the dependencies: pip install moviepy PySide6 wgpu glfw fastplotlib jupyter_rfb sidecar sortedcontainers cmasher opencv-python ssqueezepy
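After installation, a quick sanity check can be run inside the activated environment (a minimal sketch; any ImportError points to a package that did not install correctly):

```python
# Verify that the key packages installed above are importable.
import caiman
import cv2            # provided by opencv-python
import fastplotlib
import moviepy
import ssqueezepy

print("CaImAn version:", caiman.__version__)
```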
Then, you need to clone this repo to your PC. You can do this either by downloading the .zip file (see the button above) and unpacking it, or by using your git client and typing git clone -b public https://github.com/iabs-neuro/bearmind in a command prompt.
If you have trouble with mamba, you can try libmamba, which is a more conda-friendly solver. More information can be found here:
https://www.anaconda.com/blog/conda-is-fast-now
https://conda.github.io/conda-libmamba-solver/user-guide/
To do this, you may first need to update your conda distribution:
conda update -n base conda
Install the libmamba solver:
conda install -n base conda-libmamba-solver
You can set this solver as the default one by running
conda config --set solver libmamba
and then create the caiman environment:
conda create -n caiman -c conda-forge caiman
OR, you can use the libmamba solver just for this one command:
conda create -n caiman -c conda-forge caiman --solver=libmamba
Finally, activate the caiman environment:
conda activate caiman
Launch BEARMiND_demo.ipynb in JupyterLab or Jupyter Notebook and follow the instructions. Typically, you may want to duplicate the pipeline for each new user and/or experiment, but bear in mind that all .py files from this repo must be present in the folder from which you launch the pipeline.
Here is a brief description of the main stages:
Here the user can inspect raw miniscopic videos and define the optimal field of view. These parameters can be saved and copied for different imaging sessions.
INPUTS: Miniscopic calcium imaging data (.avi files)
OUTPUTS: Python archives (.pickle) with cropping parameters, stored in the same folder as the .avi files.
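The exact contents of a crop file depend on the module implementation, but it can be opened like any other pickle (a minimal sketch; the file name is hypothetical):

```python
import pickle

# Hypothetical file name; crop files are stored next to the .avi files.
with open('MyVideo_crop.pickle', 'rb') as f:
    crop_params = pickle.load(f)

# Printing the object is a safe way to inspect its structure.
print(crop_params)
```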
Batch cropping. Here the native miniscopic .avi files are cropped according to the previously saved crop parameters, concatenated, and saved as .tif files in the working directory. Timestamps are copied as well.
INPUTS: Natively stored miniscopic data and saved crop files
OUTPUTS: Cropped .tif files along with timestamps
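A cropped stack can be inspected with tifffile, which ships with the CaImAn environment (a minimal sketch; the file name is hypothetical):

```python
import tifffile

# Hypothetical file name; cropped, concatenated stacks are written
# to the working directory.
movie = tifffile.imread('MyRecording_cropped.tif')
print(movie.shape)  # (n_frames, height, width)
```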
This stage is based on the NoRMCorre piece-wise rigid motion correction routine [Pnevmatikakis & Giovannucci, 2017].
INPUTS: cropped .tif files
OUTPUTS: motion corrected .tif files
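For reference, this is roughly how piece-wise rigid NoRMCorre is invoked through CaImAn's API; BEARMiND wraps this step for you, and the file name and parameter values below are purely illustrative:

```python
from caiman.motion_correction import MotionCorrect

mc = MotionCorrect(
    ['MyRecording_cropped.tif'],  # hypothetical input file
    pw_rigid=True,                # piece-wise rigid correction
    max_shifts=(6, 6),            # maximum allowed rigid shift, pixels
    strides=(48, 48),             # start a new patch every 48 pixels
    overlaps=(24, 24),            # overlap between neighboring patches
    max_deviation_rigid=3,        # max deviation from rigid shifts
)
mc.motion_correct(save_movie=True)  # writes the corrected movie to disk
```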
Here the user can load a limited amount of data and interactively adjust the key CNMF parameters:
● gSig – the kernel size of the Gaussian filter applied to the data for proper segmentation of putative neurons
● min_corr – the minimal value on the correlation image for seeding a neuron
● min_pnr – another threshold for seeding a neuron: the minimal peak-to-noise ratio (PNR) of the time trace corresponding to each pixel of the data
This module can be launched once for a batch of data from the same animal (a sketch of the computation behind these parameters follows below).
INPUTS: Motion corrected .tif files
OUTPUTS: None
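The sketch below shows the computation behind these parameters, using CaImAn's summary images; the file name and gSig value are illustrative:

```python
import caiman as cm
from caiman.summary_images import correlation_pnr

# Load a motion-corrected movie (hypothetical file name).
movie = cm.load('MyRecording_mc.tif')

# Per-pixel correlation image and peak-to-noise ratio; pixels where
# cn > min_corr and pnr > min_pnr seed candidate neurons.
cn, pnr = correlation_pnr(movie, gSig=3, swap_dim=False)
```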
This stage is based on the “vanilla” CaImAn routine, described in detail in [Giovannucci et al., 2019].
INPUTS: Motion corrected .tif files
OUTPUTS: CNMF results (estimates objects) saved as .pickle files
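The saved results can be loaded back for custom analysis (a minimal sketch; the file name is hypothetical):

```python
import pickle

with open('MyRecording_estimates.pickle', 'rb') as f:
    estimates = pickle.load(f)

# Spatial footprints (A, pixels x components) and temporal traces
# (C, components x frames) of the detected neurons.
print(estimates.A.shape, estimates.C.shape)
```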
Here the user can load and inspect the results obtained by the CNMF routine in Module 3. The Bokeh-based interface supports simultaneous selection of neural contours along with their time traces; both plots can be panned and zoomed. The user can manually delete or merge one or several selected components. In the latter case, the spatial component with the highest signal-to-noise ratio is kept, the resulting trace is recalculated, and the new neural unit is placed at the end of the list. The results of the analysis can be saved in human-readable format (.tif images and .csv tables).
INPUTS: CNMF results (estimates objects) saved as .pickle files, obtained in Module 3; miniscopic timestamp files
OUTPUTS: a collection of .tif files with neural contours, stored in a separate folder; a .csv table with time traces, where the first column is the timestamp and the remaining columns are the traces, numbered in the same order as the contours; .mat files with the array of contours for matching neurons between sessions, which can be done with the CellReg [Sheintuch et al., 2017] routine.
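The traces table can be read with pandas (a minimal sketch; the file name is hypothetical, the column layout is as described above):

```python
import pandas as pd

traces = pd.read_csv('MyRecording_traces.csv')
time = traces.iloc[:, 0]      # first column: timestamps
neurons = traces.iloc[:, 1:]  # remaining columns: one trace per neuron
print(f"{neurons.shape[1]} neurons, {len(time)} time points")
```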
For further analysis of neural data along with the animal's behavior, it is often necessary to work with discrete events instead of continuous calcium traces. However, significant events should be separated from noise. Here the user is offered two different methods of event detection: thresholding of local maxima with a subsequent fit, and wavelet transform.
INPUTS: timestamped .csv tables with traces
OUTPUTS: .csv tables of the same size as traces with discrete (i.e., 0/1) event notation; separate .pickle files with all event parameters
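Assuming the event table has the same layout as the traces table, events per neuron can be counted like this (a minimal sketch; the file name is hypothetical):

```python
import pandas as pd

events = pd.read_csv('MyRecording_events.csv')

# Sum the 0/1 entries per neuron, skipping the timestamp column.
print(events.iloc[:, 1:].sum())
```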
Here is a list of known bugs, which is by no means complete. Please report bugs in 'Issues' (see the button above).
ERROR:bokeh.server.views.ws:Refusing websocket connection from Origin 'http://localhost:8891'; use --allow-websocket-origin=localhost:8891 or set BOKEH_ALLOW_WS_ORIGIN=localhost:8891 to permit this; currently we allow origins {'localhost:8888'}
WARNING:tornado.access:403 GET /ws (::1) 0.00ms
To deal with it, run the cell below with the proper port number, which you can take from the address bar of your browser.
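Such a cell boils down to setting the BOKEH_ALLOW_WS_ORIGIN variable, as the error message itself suggests (port 8891 here; substitute your own):

```python
import os

# Allow Bokeh websocket connections from the port your Jupyter server
# actually uses; take it from the browser's address bar.
os.environ['BOKEH_ALLOW_WS_ORIGIN'] = 'localhost:8891'
```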