Update docs.
francesco-innocenti committed Nov 27, 2024
1 parent 1001494 commit 7ee30e4
Showing 4 changed files with 18 additions and 12 deletions.
9 changes: 5 additions & 4 deletions README.md
@@ -30,7 +30,8 @@ optimisers, especially for deeper models.
diagnose issues with PCNs.

If you're new to JPC, we recommend starting from the [
-tutorial notebooks](https://thebuckleylab.github.io/jpc/examples/discriminative_pc/).
+tutorial notebooks](https://thebuckleylab.github.io/jpc/examples/discriminative_pc/)
+and checking the [documentation](https://thebuckleylab.github.io/jpc/).

## Overview
* [Installation](#installation)
@@ -60,7 +61,7 @@ Available at https://thebuckleylab.github.io/jpc/.

## ⚡️ Quick example
Use `jpc.make_pc_step` to update the parameters of any neural network compatible
-with PC updates (see [examples
+with PC updates (see the [notebook examples
](https://thebuckleylab.github.io/jpc/examples/discriminative_pc/))
```py
import jax.random as jr
@@ -95,8 +96,8 @@ model = result["model"]
optim, opt_state = result["optim"], result["opt_state"]
```
Under the hood, `jpc.make_pc_step`
-1. integrates the inference (activity) dynamics using a [Diffrax](https://github.com/patrick-kidger/diffrax) ODE solver, and
-2. updates model parameters at the numerical solution of the activities with a given [Optax](https://github.com/google-deepmind/optax) optimiser.
+1. integrates the inference (activity) dynamics using a [diffrax](https://github.com/patrick-kidger/diffrax) ODE solver, and
+2. updates model parameters at the numerical solution of the activities with a given [optax](https://github.com/google-deepmind/optax) optimiser.

See the [documentation](https://thebuckleylab.github.io/jpc/) for more details.

13 changes: 9 additions & 4 deletions docs/advanced_usage.md
@@ -32,8 +32,10 @@ param_optim = param_update_result["optim"]
param_opt_state = param_update_result["opt_state"]
```
which can be embedded in a jitted function with any other additional
-computations. One can also use any Optax optimiser to equilibrate the inference
-dynamics by replacing the function in step 2, as shown below.
+computations. One can also use any [optax
+](https://optax.readthedocs.io/en/latest/api/optimizers.html) optimiser to
+equilibrate the inference dynamics by replacing the function in step 2, as
+shown below.
```py
activity_optim = optax.sgd(1e-3)

@@ -60,8 +62,11 @@ for t in range(T):
# 3. update parameters at the activities' solution with PC
...
```
-JPC also comes with some analytical tools that can be used to study and
-potentially diagnose issues with PCNs (see [docs
+See the [updates docs
+](https://thebuckleylab.github.io/jpc/api/Updates/) for more details. JPC also
+comes with some analytical tools that can be used to study and potentially
+diagnose issues with PCNs
+(see [docs
](https://thebuckleylab.github.io/jpc/api/Analytical%20tools/)
and [example notebook
](https://thebuckleylab.github.io/jpc/examples/linear_net_theoretical_energy/)).
4 changes: 2 additions & 2 deletions docs/basic_usage.md
@@ -39,7 +39,7 @@ update_result = jpc.make_pc_step(
model = update_result["model"]
optim, opt_state = update_result["optim"], update_result["opt_state"]
```
-As shown above, at a minimum `jpc.make_pc_step` takes a model, an [Optax
+As shown above, at a minimum `jpc.make_pc_step` takes a model, an [optax
](https://github.com/google-deepmind/optax) optimiser and its
state, and some data. The model needs to be compatible with PC updates in the
sense that it's split into callable layers (see the
@@ -50,7 +50,7 @@ that the `input` is actually not needed for unsupervised training. In fact,
supervised as well as unsupervised training (again see the [example notebooks
](https://thebuckleylab.github.io/jpc/examples/discriminative_pc/)).
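For intuition, a model "split into callable layers" can be sketched as a plain list of functions whose prediction errors define the PC energy. The names and structure here are illustrative only, not JPC's actual model API.

```python
import jax.numpy as jnp
import jax.random as jr

key1, key2 = jr.split(jr.PRNGKey(0))
W1 = 0.1 * jr.normal(key1, (4, 3))
W2 = 0.1 * jr.normal(key2, (2, 4))

# a "model" as a list of callable layers
layers = [lambda x: jnp.tanh(W1 @ x), lambda z: W2 @ z]

def pc_energy(activities, x):
    """Sum of squared errors between each activity and the
    previous layer's prediction of it (z_0 is the input x)."""
    energy, prev = 0.0, x
    for layer, z in zip(layers, activities):
        energy += 0.5 * jnp.sum((z - layer(prev)) ** 2)
        prev = z
    return energy

x = jnp.ones(3)
activities = [jnp.zeros(4), jnp.zeros(2)]
E = pc_energy(activities, x)
```

In a supervised setting the last activity would be clamped to the target, while unsupervised training leaves the boundary free, consistent with `input` being optional.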

-Under the hood, `jpc.make_pc_step` uses [Diffrax
+Under the hood, `jpc.make_pc_step` uses [diffrax
](https://github.com/patrick-kidger/diffrax) to solve the activity (inference)
dynamics of PC. Many default arguments can be changed, including the ODE
solver, and there is an option to record a
4 changes: 2 additions & 2 deletions docs/index.md
Expand Up @@ -77,8 +77,8 @@ optim, opt_state = result["optim"], result["opt_state"]
```
Under the hood, `jpc.make_pc_step`

-1. integrates the inference (activity) dynamics using a [Diffrax](https://github.com/patrick-kidger/diffrax) ODE solver, and
-2. updates model parameters at the numerical solution of the activities with a given [Optax](https://github.com/google-deepmind/optax) optimiser.
+1. integrates the inference (activity) dynamics using a [diffrax](https://github.com/patrick-kidger/diffrax) ODE solver, and
+2. updates model parameters at the numerical solution of the activities with a given [optax](https://github.com/google-deepmind/optax) optimiser.

> **NOTE**: All convenience training and test functions such as `make_pc_step`
> are already "jitted" (for optimised performance) for the user's convenience.
