update readme
Andy-wyx committed Dec 14, 2023
1 parent a167e3d commit a1a6736
Showing 2 changed files with 29 additions and 14 deletions.
39 changes: 27 additions & 12 deletions README.md
@@ -1,13 +1,24 @@
# Biologically plausible neural networks

(Group Project of CLPS1291 Fall 23 @ Brown)

In the pursuit of artificial intelligence that mirrors the deftness of human cognition, the concept of biological plausibility stands as a beacon, guiding the design of neural networks toward the intricate workings of the human brain. A neural network that is considered biologically plausible emulates the structure and functions of the biological nervous system, often with the purpose of improving the performance of neural networks or gaining insights into processes of the biological brain.

While backpropagation (BP) is a cornerstone of training modern neural networks, it deviates from how biological neural systems function in several key ways:

* BP relies on inter-layer weight dynamics, whereas biological neurons transmit information locally.
* BP uses the same (symmetric) weights for the forward and backward passes, in contrast to the one-directional, asymmetric nature of biological synapses.
* BP assumes continuously valued neuron outputs, as opposed to the all-or-none, threshold-based firing of biological neurons.

Recognizing these discrepancies, this project explores neural network techniques that better mimic how the brain learns. The aim is to investigate whether such biologically inspired alternatives to backpropagation can improve the performance and interpretability of neural networks.
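As an illustration of one such alternative, feedback alignment (used in the experiments below) replaces the transposed forward weights in the backward pass with a fixed random matrix. A minimal NumPy sketch of the idea — all sizes, the learning rate, and the toy regression target are illustrative, not taken from this repo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes for a one-hidden-layer regression network.
n_in, n_hid, n_out = 4, 8, 2

W1 = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output
B2 = rng.normal(scale=0.1, size=(n_hid, n_out))  # fixed random feedback matrix

def fa_step(x, target, lr=0.1):
    """One training step using feedback alignment instead of backprop."""
    global W1, W2
    h = np.maximum(W1 @ x, 0.0)           # ReLU hidden layer
    y = W2 @ h
    e = y - target                        # output error
    # Backprop would send the error through W2.T; feedback alignment
    # sends it through the fixed random matrix B2 instead.
    delta_h = (B2 @ e) * (h > 0)
    W2 = W2 - lr * np.outer(e, h)
    W1 = W1 - lr * np.outer(delta_h, x)
    return float(0.5 * e @ e)

x = rng.normal(size=n_in)
target = np.array([1.0, -1.0])
losses = [fa_step(x, target) for _ in range(50)]
```

Remarkably, the forward weights tend to align with the fixed feedback weights during training, which is why learning can still succeed despite the asymmetric backward path.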

The full final report can be found here: [LINK TO PDF]

# Requirements

* Python
* numpy
* torch
* torchvision
* matplotlib
* CUDA (for hybridBio_learning)

# Folder Explanation

**Feedback_Alignment**:
* A PyTorch implementation of [Random synaptic feedback weights support error backpropagation for deep learning](https://www.nature.com/articles/ncomms13276) based on [L0SG/feedback-alignment-pytorch](https://github.com/L0SG/feedback-alignment-pytorch)
@@ -24,23 +35,27 @@

**hybridBio_learning**:
* A PyTorch implementation of the [Unsupervised learning by competing hidden units](https://www.pnas.org/doi/10.1073/pnas.1820458116) MNIST classifier, based on [gatapia/unsupervised_bio_classifier](https://github.com/gatapia/unsupervised_bio_classifier) and combined with feedback alignment. The original descriptive documentation can be found [here](https://github.com/clps1291-bioplausnn/hybrid-bioLearning).
* Experiments on:
    * the blend of Krotov's unsupervised layers without biocells and linear layers (Krotov's HebbNet w/o biocells + fc)
    * the blend of Krotov's unsupervised layers with biocells and linear layers (Krotov's HebbNet w/ biocells + fc)
    * the blend of Krotov's unsupervised layers, with or without biocells, and feedback alignment layers (Krotov's HebbNet + FA)
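The competitive Hebbian idea behind Krotov's unsupervised layers can be sketched as follows. This is a deliberately simplified caricature of the paper's rule (which uses a Lebesgue p-norm and a ranking-based activation); the sizes and constants below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid = 16, 10
W = rng.normal(scale=0.1, size=(n_hid, n_in))  # hidden-unit weight vectors

def hebbian_step(v, lr=0.02, delta=0.4, k=3):
    """One simplified competitive-Hebbian update: the strongest unit is
    pushed toward the input (Hebbian), the k-th strongest is pushed away
    (anti-Hebbian), and a decay term keeps the weight norms bounded."""
    currents = W @ v
    order = np.argsort(currents)   # ascending by input current
    winner = order[-1]             # strongest unit: Hebbian push
    runner_up = order[-k]          # k-th strongest: anti-Hebbian push
    for unit, g in ((winner, 1.0), (runner_up, -delta)):
        W[unit] += lr * g * (v - currents[unit] * W[unit])

# Unsupervised training on random inputs (stand-ins for image vectors).
for _ in range(200):
    hebbian_step(rng.normal(size=n_in))
```

In the hybrid models above, a layer trained this way (without labels) is frozen and a linear or feedback-alignment readout is trained on top of it.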

<p align="center">
<img src="hybridBio_learning/images/accuracy across hybrid models.jpeg" width=600><br/>
</p>

# Analysis

See the final report: [LINK TO PDF]
# Future Work
* Enable GPU mode for semiHebbNet.
* Train semiHebbNet in a single phase, finding the best learning rates for the Hebbian layers and the linear layers respectively.
* More hyperparameter tuning on these models to compare their peak accuracy.
* Compare efficiency under the same experimental settings (same epochs, dataset, learning rate, hardware, etc.).
* Explore more biologically plausible neural networks, e.g. [Scaling Forward Gradient with Local Losses](https://arxiv.org/abs/2210.03310).
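For the single-phase training idea, PyTorch optimizers accept per-parameter-group learning rates, so the Hebbian-style layers and the linear readout could be trained jointly at different rates. A sketch with a stand-in model — the layers, sizes, and rates are illustrative, not the repo's actual classes:

```python
import torch
import torch.nn as nn

# Stand-in two-stage model: a feature layer followed by a linear readout.
model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))

# One optimizer, two parameter groups with separate learning rates,
# so both stages can be trained in a single phase.
optimizer = torch.optim.SGD([
    {"params": model[0].parameters(), "lr": 1e-3},  # feature layer
    {"params": model[2].parameters(), "lr": 1e-2},  # linear readout
])

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Each group's learning rate could then be tuned independently (e.g. with a small grid search) instead of training the two stages in separate phases.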

# Useful Resources

Besides the torchvision models, [GluonCV](https://github.com/dmlc/gluon-cv/tree/master/gluoncv/model_zoo) includes many pretrained SOTA models in CV.
4 changes: 2 additions & 2 deletions semiHebb_learning/main_comparison.ipynb
@@ -32,7 +32,7 @@
"source": [
"# Model Performance Comparison on CIFAR10\n",
"\n",
"five semihebbnets, with the same network depth, the same #neurons on each layer, the same training (1 epoch). we evaluate and compare performance across different HebbLayers & LinearLayers combinations."
"four semihebbnets, with the same network depth, the same #neurons on each layer, the same training (1 epoch). we evaluate and compare performance across different HebbLayers & LinearLayers combinations."
]
},
{
@@ -262,7 +262,7 @@
"source": [
"# Model Performance Comparison on MNIST\n",
"\n",
"five semihebbnets, with the same network depth, the same #neurons on each layer, the same training (1 epoch). we evaluate and compare performance across different HebbLayers & LinearLayers combinations."
"four semihebbnets, with the same network depth, the same #neurons on each layer, the same training (1 epoch). we evaluate and compare performance across different HebbLayers & LinearLayers combinations."
]
},
{
