MELM Development #12

Open
jkbchouinard opened this issue Jan 15, 2025 · 0 comments
As we explored in our regular meeting, we can attach learning rules to connections in Nengo so that the weights between ensembles are adjusted automatically:

import numpy as np
import nengo

training_time = 10.0  # placeholder: seconds of simulated learning before weights are frozen

with nengo.Network() as model:
    # Stub inputs -- swap these for the real spike train and target kinematics
    in_node = nengo.Node(output=lambda t: np.sin(2 * np.pi * t))
    target_node = nengo.Node(output=lambda t: np.sin(2 * np.pi * t))  # can also be folded into in_node as another dim or two
    inhib_node = nengo.Node(output=lambda t: 1.0 if t >= training_time else 0.0)

    # Placeholder sizes/radii -- tune these for the dataset
    rep_ens = nengo.Ensemble(n_neurons=100, dimensions=1, radius=1)
    out_ens = nengo.Ensemble(n_neurons=100, dimensions=1, radius=1)
    err_ens = nengo.Ensemble(n_neurons=100, dimensions=1, radius=1)

    in_rep_con = nengo.Connection(in_node, rep_ens, synapse=0.01)  # acts as a low-pass for the spike input
    # Start by decoding zeros and let the learning rule (PES here) find the mapping
    rep_out_con = nengo.Connection(rep_ens, out_ens, function=lambda x: 0,
                                   learning_rule_type=nengo.PES(learning_rate=1e-4))
    out_err_con = nengo.Connection(out_ens, err_ens)
    tar_err_con = nengo.Connection(target_node, err_ens, transform=-1)  # error = actual - target
    # Connects the error ensemble's value to the learning rule -- analogous to how
    # backprop uses error to follow the stochastic gradient during training
    err_lrn_con = nengo.Connection(err_ens, rep_out_con.learning_rule)
    # Inhibit the error ensemble once training is done to prevent weight
    # changes after training_time
    inhib_lrn_con = nengo.Connection(inhib_node, err_ens.neurons,
                                     transform=-20 * np.ones((err_ens.n_neurons, 1)))

    p_out = nengo.Probe(out_ens, synapse=0.01)
    p_err = nengo.Probe(err_ens, synapse=0.01)
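
For intuition, the error-driven rule Nengo typically uses for this pattern (PES) can be sketched in plain NumPy. This is an illustrative simplification, not Nengo's internal implementation, and the function name and toy arrays below are made up for the example:

```python
import numpy as np

def pes_step(decoders, activities, error, kappa=1e-4):
    """One PES-style decoder update.

    decoders:   (n_neurons, dims) current decoding weights
    activities: (n_neurons,) filtered firing rates at this timestep
    error:      (dims,) decoded output minus target
    """
    # Move each neuron's decoder against the error, weighted by how
    # active that neuron was (outer product of activity and error).
    return decoders - kappa * np.outer(activities, error)

# Toy check: with positive error (output too high), the decoded estimate shrinks.
rng = np.random.default_rng(0)
d = rng.normal(size=(50, 1))
a = rng.uniform(0, 100, size=50)
e = np.array([0.5])
d_new = pes_step(d, a, e)
print(a @ d_new < a @ d)  # decoded estimate decreased
```

This is why the error ensemble is wired as actual minus target: the update pushes the decoded output down when it overshoots and up when it undershoots.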

We're interested in seeing how this model performs when trained and tested on our spiking dataset. Since we're dealing with the brain, our ideal output is some form of acceleration signal, though you're welcome to try other kinematic variables as an exploratory alternative. Because we're concerned with performance over time, plotting the error (absolute and squared) against time would be a useful way to visualize where the model struggles.
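Once a simulation has run, the probe data can be turned into those error-over-time curves. A minimal sketch with synthetic stand-in arrays (replace `decoded` and `target` with `sim.data[p_out]` and the probed target signal from a real run):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 2.0, dt)
target = np.sin(2 * np.pi * t)  # stand-in kinematic target
# Stand-in decoded output: target plus noise that decays as "learning" proceeds
decoded = target + 0.1 * np.exp(-t) * np.random.default_rng(1).normal(size=t.size)

abs_err = np.abs(decoded - target)  # absolute error over time
sq_err = (decoded - target) ** 2    # squared error over time

# A running mean makes the learning trend easier to see than raw samples
window = 100
running_mse = np.convolve(sq_err, np.ones(window) / window, mode="valid")
print(running_mse[0] > running_mse[-1])  # error shrinks over the run
```

Passing `t` and `abs_err` (or the running MSE) to `matplotlib.pyplot.plot` then gives the over-time view; a spike in either curve flags where the model struggles.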

Similarly, it would be useful to generate visualizations of the MELM model using nengo-gui for a more visual and intuitive way to interpret the underlying architecture.

Happy Braining :)

  • Jake & Rai