Welcome to the Feedforward Neural Network project! This project is a simple yet powerful implementation of a neural network built in C99 to classify 28x28 pixel images of digits. 📊
P.S. Everything I do is on WSL2 running Debian 12 (Bookworm).
- 🖼️ Image Input: 28x28 grayscale images (e.g., from the MNIST dataset).
- ⚡ Activation Function: ReLU (Rectified Linear Unit) for non-linear activation.
- 🔄 Learning Method: Backpropagation for optimizing the network.
- 🧮 Loss Function: Cross-entropy to calculate the model's performance.
- 🔧 Adjustable Hyperparameters: Easily customize layer sizes, learning rates, etc.
.
├── src
│ ├── main.c
│ └── softmax.c
├── include
│ └── softmax.h
├── data
│ ├── data_organizer.py
│ ├── test_images.txt
│ ├── test_lables.txt
│ ├── training_images.txt
│ └── training_lables.txt
├── .gitignore
├── makefile
├── LICENSE
└── README.md # You're looking at it! 📄
- Clone the repository:

```shell
git clone https://github.com/username/feedforward-neural-network.git
cd feedforward-neural-network
```

- Build with the makefile:

```shell
make
```

  Or compile the code directly (requires a C99-compatible compiler such as gcc):

```shell
gcc -std=c99 -o prg src/main.c src/softmax.c -Iinclude -lm
```

- Run the neural network:

```shell
./prg
```
- Input Layer:
The network takes 28x28 pixel images (flattened to a 784-dimensional vector) as input. Each pixel is normalized to the range [0, 1].
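The normalization step can be sketched like this (a minimal sketch; the function name is illustrative):

```c
#include <stdint.h>

#define INPUT_SIZE 784  /* 28 x 28 pixels, flattened */

/* Scale raw 0-255 grayscale pixel values into the range [0, 1]. */
void normalize_input(const uint8_t *pixels, double *out)
{
    for (int i = 0; i < INPUT_SIZE; i++)
        out[i] = pixels[i] / 255.0;
}
```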
- Hidden Layers:
Fully connected hidden layers with ReLU activation. The number of hidden layers and neurons per layer can be easily adjusted in the configuration.
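A fully connected layer with ReLU activation boils down to a matrix-vector product followed by an element-wise max with zero. A minimal sketch (function names and the row-major weight layout are assumptions, not the project's actual code):

```c
/* ReLU: max(0, x), applied element-wise. */
static double relu(double x) { return x > 0.0 ? x : 0.0; }

/* Forward pass of one dense layer: out = ReLU(W * in + b).
 * weights is row-major: weights[j * n_in + i] connects input i
 * to output neuron j. */
void dense_relu_forward(const double *weights, const double *bias,
                        const double *in, double *out,
                        int n_in, int n_out)
{
    for (int j = 0; j < n_out; j++) {
        double sum = bias[j];
        for (int i = 0; i < n_in; i++)
            sum += weights[j * n_in + i] * in[i];
        out[j] = relu(sum);
    }
}
```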
- Output Layer:
Contains 10 neurons corresponding to the 10 digit classes (0-9). The softmax function is applied to convert raw scores into probabilities.
- Training:
The network uses backpropagation and the cross-entropy loss function to update weights and biases during training.
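One reason the softmax/cross-entropy pairing is popular is that the combined gradient with respect to the output logits collapses to a simple subtraction: dL/dz_i = p_i - y_i, where p is the softmax output and y is the one-hot label. A sketch of that output-layer step (the function name is illustrative):

```c
/* Gradient of cross-entropy w.r.t. the output logits when the
 * output layer is softmax: grad[i] = probs[i] - y[i], where y is
 * the one-hot encoding of the true label. */
void output_gradient(const double *probs, int label, double *grad, int n)
{
    for (int i = 0; i < n; i++)
        grad[i] = probs[i] - (i == label ? 1.0 : 0.0);
}
```

This gradient is then propagated backward through the hidden layers, and each weight is nudged opposite to its gradient, scaled by the learning rate.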
🧑‍💻 After training for several epochs, the neural network can achieve high accuracy on the test set of digit images. You can tweak the learning rate, number of layers, or number of neurons to improve performance.
- Learning Rate: adjustable in `main.c`.
- Number of Hidden Layers: configurable in `main.c`.
- Number of Neurons: you can set the number of neurons for each hidden layer.
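In a single-file C project like this, such settings typically live near the top of `main.c`. A hypothetical example of what that might look like (these macro names and values are illustrative, not the project's actual identifiers):

```c
/* Hypothetical hyperparameter block -- adjust and recompile. */
#define LEARNING_RATE     0.01
#define NUM_HIDDEN_LAYERS 2
#define HIDDEN_NEURONS    128   /* neurons per hidden layer */
#define EPOCHS            10
```

Because these are compile-time constants, changing any of them requires rebuilding with `make`.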
After training, you should expect to see results like:
- Training Accuracy: ~98%
- Test Accuracy: ~97%
We welcome contributions from the community! Feel free to submit pull requests or open issues.
This project is licensed under the MIT License. See the LICENSE file for details.
Happy coding! 😎