Low-level implementation of basic and LSTM recurrent units
Using NumPy only, I have implemented forward propagation for a standard (vanilla) RNN and an LSTM.
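For reference, a single forward step of each cell can be sketched as below. This is a NumPy-only illustration in the Deep Learning Specialization's notation; the function and parameter names are illustrative and not necessarily the ones used in this repository's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_cell_forward(xt, a_prev, Wax, Waa, ba):
    """One step of a basic RNN: a_next = tanh(Waa @ a_prev + Wax @ xt + ba)."""
    return np.tanh(Waa @ a_prev + Wax @ xt + ba)

def lstm_cell_forward(xt, a_prev, c_prev, Wf, bf, Wi, bi, Wc, bc, Wo, bo):
    """One step of an LSTM cell; gates act on the stacked vector [a_prev; xt]."""
    concat = np.vstack([a_prev, xt])   # stack previous hidden state and input
    ft = sigmoid(Wf @ concat + bf)     # forget gate
    it = sigmoid(Wi @ concat + bi)     # input (update) gate
    cct = np.tanh(Wc @ concat + bc)    # candidate cell state
    c_next = ft * c_prev + it * cct    # new cell state
    ot = sigmoid(Wo @ concat + bo)     # output gate
    a_next = ot * np.tanh(c_next)      # new hidden state
    return a_next, c_next
```

Iterating either cell over the time axis of an input sequence, carrying `a` (and `c` for the LSTM) forward between steps, gives the full forward pass.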
Note: The math explanations and discussion are inspired by the Deep Learning Specialization (taught by Andrew Ng) on Coursera.
Deriving backpropagation (through time) is a work in progress.
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update tests as appropriate.