minigrad is a toy implementation of an autograd engine, inspired by Andrej Karpathy's micrograd project, with an API similar to that of PyTorch. It supports both CPU and GPU operations using NumPy and CuPy, respectively.
My goal with minigrad is to explore and learn the internals of deep learning frameworks; minigrad is not optimized for speed or efficiency. As of now, it provides basic tensor operations, backpropagation, neural network layers, optimizers, and loss functions.
- Tensor Operations: Basic tensor operations with automatic differentiation.
- CPU and GPU Support: Uses NumPy for CPU operations and CuPy for GPU operations.
- Neural Networks: Includes simple layers like `Linear` and activations like `ReLU`.
- Optimizers: Implements basic optimizers like `SGD`.
- Loss Functions: Provides loss functions like `MSELoss`.
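These pieces are meant to compose the way their PyTorch counterparts do. Below is a minimal sketch of a single training step; the module paths `mg.nn.Linear`, `mg.nn.ReLU`, `mg.nn.MSELoss`, and `mg.optim.SGD`, and the `parameters()`/`zero_grad()`/`step()` methods, are assumptions borrowed from PyTorch's API rather than confirmed minigrad names.

```python
# Hypothetical training step, assuming minigrad mirrors PyTorch's
# nn/optim interfaces. All mg.nn.*/mg.optim.* names below are
# assumptions, not confirmed minigrad API.
import minigrad as mg

linear1 = mg.nn.Linear(3, 4)   # 3 inputs -> 4 hidden units
relu = mg.nn.ReLU()
linear2 = mg.nn.Linear(4, 1)   # 4 hidden units -> 1 output

# Collect trainable parameters and set up the optimizer and loss
params = linear1.parameters() + linear2.parameters()
optimizer = mg.optim.SGD(params, lr=0.01)
loss_fn = mg.nn.MSELoss()

x = mg.Tensor([[1.0, 2.0, 3.0]])
y = mg.Tensor([[1.0]])

pred = linear2(relu(linear1(x)))  # forward pass
loss = loss_fn(pred, y)           # scalar loss

optimizer.zero_grad()             # clear stale gradients
loss.backward()                   # backpropagate through the graph
optimizer.step()                  # apply the SGD update
```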
To install minigrad:

```bash
pip install minigrad-python
```
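Note that GPU support comes from CuPy, which requires a CUDA-capable GPU; assuming the package does not pull CuPy in automatically, you may need to install a CuPy build matching your CUDA version (for example, `cupy-cuda12x`) separately.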
```python
import minigrad as mg

# Create two tensors on the GPU
a = mg.Tensor([1, 2, 3], device='gpu')
b = mg.Tensor([2, 3, 4], device='gpu')

# Build a small computation graph and backpropagate through it
c = a + b
d = c.sum()
d.backward()

# Inspect the gradients accumulated on the leaf tensors
print(a.grad)
print(b.grad)
```
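Since `d` is the sum of all elements of `a + b`, the derivative of `d` with respect to each element of `a` and `b` is 1, so both prints should show a gradient of ones for every element.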
- A very simple example of training a neural network can be found in `examples/simple_nn.py`.
- TODO: Add more examples.
To run the unit tests and benchmarks for the project:

```bash
pytest
```