This is the accompanying Python code for the 2023 Information Processing in Medical Imaging (IPMI) conference manuscript "Scalable Orthonormal Projective NMF via Diversified Stochastic Optimization". This implementation performs stochastic learning for orthonormal projective nonnegative matrix factorization (opNMF, introduced in "Linear and Nonlinear Projective Nonnegative Matrix Factorization"), drawing mini-batches either by uniform random sampling or by determinantal point process (DPP) sampling, which reduces the memory footprint of the method and improves its scalability to big data. The opNMF implementation for the neuroimaging context is a stripped-down Python port of the MATLAB opnmf.m and opnmf_mem.m scripts found in the brainparts GitHub repository. Portions of the DPP implementation were adapted from the MATLAB implementation on Dr. Alex Kulesza's website, the Python implementation of DPP in the HYDRA GitHub repository, and the Python implementation of DPP inside Hydra found in the MLNI GitHub repository.
This code was tested on Linux (Rocky Linux release 8.8) using Python installed through Anaconda. The following Python packages, available through either the default repository or the conda-forge repository, are required to run this code.
Package Name | Package Version Tested | Notes |
---|---|---|
numpy | 1.20.3 | |
scipy | 1.7.1 | |
scikit-learn | 1.0.1 | |
hdf5storage | 0.1.16 | used to save and load input, intermediate, and final output files in compressed HDF5 format that can also be loaded in MATLAB. |
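To confirm your environment provides these packages, a quick sanity check such as the following can be run (the versions above are the ones tested, not hard requirements):

```python
# Print the versions of the required packages to verify the environment.
import numpy
import scipy
import sklearn
import hdf5storage

print("numpy:", numpy.__version__)
print("scipy:", scipy.__version__)
print("scikit-learn:", sklearn.__version__)
print("hdf5storage:", hdf5storage.__version__)
```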
- You will need to prepare a nonnegative input data matrix in HDF5 compressed format with a `.mat` extension, saved with the variable name `X`.
- For example, you can create a random data matrix of size 5000 by 1000 and save it to an `input.mat` file using the following Python snippet.

  ```python
  import numpy as np
  import hdf5storage

  # Create a nonnegative random data matrix (5000 rows by 1000 columns).
  X = np.abs(np.random.rand(5000, 1000))

  # Save it in compressed HDF5 format under the variable name "X".
  hdf5storage.savemat("./input.mat", mdict={"X": X})
  ```
- Note that unlike the brainparts GitHub repository implementation, which includes additional preprocessing to remove pixels/voxels that are zero across all columns and to downsample the `X` matrix prior to the multiplicative updates, this implementation does NOT perform such preprocessing. Appropriate preprocessing and downsampling are left to the end user to carry out before calling this code.
- Also note that the input data matrix `X` has to be nonnegative! (A quick check for this is sketched below.)
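- Before launching a factorization, a short sanity check like the following can reload the saved file and confirm the matrix is nonnegative (a minimal sketch; `hdf5storage.loadmat` mirrors the `savemat` call above):

  ```python
  import numpy as np
  import hdf5storage

  # Reload the input matrix and verify nonnegativity and shape.
  X = hdf5storage.loadmat("./input.mat")["X"]
  assert np.all(X >= 0), "The input data matrix X must be nonnegative."
  print("X shape:", X.shape)
  ```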
- Determine how many components you want to generate. We will call this value the target rank. Default is $20$.
- Determine how many iterations to run before terminating the multiplicative updates. Default is $1.0 \times 10^4$.
- Determine the early stopping criterion threshold such that if $\frac{\| W_{t+1} - W_{t} \|_{F}^{2}}{\| W_{t+1} \|_{F}^{2}} < tol$, the updates terminate. Default is $1.0 \times 10^{-6}$. (A sketch of this check appears after the command examples below.)
- Determine how you want your initial component matrix $W$ to be initialized. Default is to use nndsvd.
- Determine where the outputs will be generated.
- Call the `Python/opnmf.py` script with appropriate parameters.
  - For opNMF with speed-optimized but high-memory-footprint resource usage:

    ```bash
    python opnmf.py --inputFile="/path/to/input.mat" --outputParentDir="/path/to/output/directory" --targetRank=20 --initMeth="nndsvd" --maxIter=10000 --updateMeth="original" --tol=0.000001
    ```

  - For opNMF with memory-optimized but speed-inefficient resource usage:

    ```bash
    python opnmf.py --inputFile="/path/to/input.mat" --outputParentDir="/path/to/output/directory" --targetRank=20 --initMeth="nndsvd" --maxIter=10000 --updateMeth="mem" --tol=0.000001
    ```
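The early stopping check referenced above compares the squared Frobenius norm of the change in $W$ against the current scale of $W$. Here is a minimal sketch of that criterion (the function and variable names are illustrative, not the identifiers used in `opnmf.py`):

```python
import numpy as np

def should_stop(W_prev, W_next, tol=1e-6):
    """Early stopping: relative squared-Frobenius change in W falls below tol."""
    # ||W_{t+1} - W_t||_F^2 / ||W_{t+1}||_F^2 < tol
    diff = np.linalg.norm(W_next - W_prev, ord="fro") ** 2
    scale = np.linalg.norm(W_next, ord="fro") ** 2
    return diff / scale < tol
```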
- Prepare the nonnegative input data matrix `X` and choose the target rank, iteration/epoch budget, early stopping tolerance, $W$ initialization method, and output location exactly as described in the opNMF steps above.
- Determine how big you want your mini-batch to be. Default value is $p =$ `batch_size` $= 100$.
  - Note that the current implementation does not handle the edge case where the total number of subjects $n$ (= number of columns of `X`) is not evenly divisible by the mini-batch size $p$, i.e., where $n \bmod p \neq 0$; choose $p$ accordingly (see the check sketched below).
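- A guard like the following can catch that edge case before launching a run (an illustrative sketch; `X` is the input matrix and `p` the chosen `--sampleSize`):

  ```python
  # Verify that the mini-batch size evenly divides the number of subjects.
  n = X.shape[1]  # number of subjects (columns of X)
  p = 100         # mini-batch size
  if n % p != 0:
      raise ValueError(f"mini-batch size {p} does not evenly divide n = {n}")
  ```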
- Determine what sampling method you want to use to sample the mini-batches.
  - `--samplingMeth=uniform`: uniform random sampling.
    - If you want uniform random sampling with replacement, make sure to also pass the `--withReplacement` flag.
    - If you do not pass the `--withReplacement` flag, it will perform uniform random sampling without replacement.
  - `--samplingMeth=dpplinear`: DPP sampling using $X^{T}X$ as the kernel.
  - `--samplingMeth=dppgaussian`: DPP sampling using a Gaussian kernel built from a CSV file with $n + 1$ lines (a header line plus one line per subject) for $X \in \mathbb{R}^{m \times n}$ with $n$ subjects. The file expects two variables as headers of the CSV file, `age` and `sex`. The CSV file path should be provided via the `--demographic_data_path=/path/to/file.csv` argument.
    - Use the `--sigma` argument to provide $\sigma$ for the Gaussian kernel. (A sketch of such a kernel appears after the command examples below.)
- Call the `Python/sopnmf.py` script with appropriate parameters.
  - For sopNMF with uniform sampling without replacement:

    ```bash
    python sopnmf.py --inputFile="/path/to/input.mat" --outputParentDir="/path/to/output/directory" --targetRank=20 --initMeth="nndsvd" --maxEpoch=10000 --multiplicativeUpdateMeth="normalize" --tol=0.000001 --sampleSize=100 --samplingMeth="uniform" --updateMeth="mem"
    ```

  - For sopNMF with uniform sampling with replacement:

    ```bash
    python sopnmf.py --inputFile="/path/to/input.mat" --outputParentDir="/path/to/output/directory" --targetRank=20 --initMeth="nndsvd" --maxEpoch=10000 --multiplicativeUpdateMeth="normalize" --tol=0.000001 --sampleSize=100 --samplingMeth="uniform" --withReplacement --updateMeth="mem"
    ```

  - For dsopNMF with DPP Gaussian sampling on demographic data:

    ```bash
    python sopnmf.py --inputFile="/path/to/input.mat" --outputParentDir="/path/to/output/directory" --targetRank=20 --initMeth="nndsvd" --maxEpoch=10000 --multiplicativeUpdateMeth="normalize" --tol=0.000001 --sampleSize=100 --samplingMeth="dppgaussian" --sigma=0.1 --demographic_data_path="/path/to/demographic/age/sex/data.csv" --updateMeth="mem"
    ```

  - For dsopNMF with DPP linear sampling on $X^{T}X$:

    ```bash
    python sopnmf.py --inputFile="/path/to/input.mat" --outputParentDir="/path/to/output/directory" --targetRank=20 --initMeth="nndsvd" --maxEpoch=10000 --multiplicativeUpdateMeth="normalize" --tol=0.000001 --sampleSize=100 --samplingMeth="dpplinear" --updateMeth="mem"
    ```
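For intuition about the `dppgaussian` option, the following is a minimal sketch of how a Gaussian similarity kernel over the `age` and `sex` columns could be built, and how a DPP scores a candidate mini-batch: the unnormalized probability of selecting a subset $S$ is proportional to $\det(L_{S})$, so demographically similar subjects are unlikely to be sampled together. The function names and the standardization step are illustrative assumptions, not the repository's actual implementation; exact DPP sampling algorithms follow the Kulesza code referenced above.

```python
import numpy as np

def gaussian_kernel_from_csv(csv_path, sigma=0.1):
    # Read a CSV whose header contains "age" and "sex" columns
    # (assumes sex is numerically encoded, e.g., 0/1).
    data = np.genfromtxt(csv_path, delimiter=",", names=True)
    feats = np.column_stack([data["age"], data["sex"]]).astype(float)
    # Standardize so age and sex contribute on comparable scales (assumption).
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-12)
    # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel.
    sq_dists = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def dpp_log_score(L, subset):
    # Unnormalized log-probability of `subset` under the DPP with kernel L.
    sign, logdet = np.linalg.slogdet(L[np.ix_(subset, subset)])
    return logdet if sign > 0 else -np.inf
```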