From 7ee30e4668fa1183d3df0d57a4800795b6f5ef8d Mon Sep 17 00:00:00 2001
From: Francesco Innocenti
Date: Wed, 27 Nov 2024 16:27:36 +0000
Subject: [PATCH] Update docs.

---
 README.md              |  9 +++++----
 docs/advanced_usage.md | 13 +++++++++----
 docs/basic_usage.md    |  4 ++--
 docs/index.md          |  4 ++--
 4 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index a7242dd..e1d58c3 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,8 @@ optimisers, especially for deeper models.
 diagnose issues with PCNs.
 
 If you're new to JPC, we recommend starting from the [
-tutorial notebooks](https://thebuckleylab.github.io/jpc/examples/discriminative_pc/).
+tutorial notebooks](https://thebuckleylab.github.io/jpc/examples/discriminative_pc/)
+and checking the [documentation](https://thebuckleylab.github.io/jpc/).
 
 ## Overview
 * [Installation](#installation)
@@ -60,7 +61,7 @@ Available at https://thebuckleylab.github.io/jpc/.
 ## ⚡️ Quick example
 
 Use `jpc.make_pc_step` to update the parameters of any neural network compatible
-with PC updates (see [examples
+with PC updates (see the [notebook examples
 ](https://thebuckleylab.github.io/jpc/examples/discriminative_pc/))
 ```py
 import jax.random as jr
@@ -95,8 +96,8 @@ model = result["model"]
 optim, opt_state = result["optim"], result["opt_state"]
 ```
 Under the hood, `jpc.make_pc_step`
-1. integrates the inference (activity) dynamics using a [Diffrax](https://github.com/patrick-kidger/diffrax) ODE solver, and
-2. updates model parameters at the numerical solution of the activities with a given [Optax](https://github.com/google-deepmind/optax) optimiser.
+1. integrates the inference (activity) dynamics using a [diffrax](https://github.com/patrick-kidger/diffrax) ODE solver, and
+2. updates model parameters at the numerical solution of the activities with a given [optax](https://github.com/google-deepmind/optax) optimiser.
 
 See the [documentation](https://thebuckleylab.github.io/jpc/) for more details.
 
diff --git a/docs/advanced_usage.md b/docs/advanced_usage.md
index 5b1a09d..73fe4d4 100644
--- a/docs/advanced_usage.md
+++ b/docs/advanced_usage.md
@@ -32,8 +32,10 @@ param_optim = param_update_result["optim"]
 param_opt_state = param_update_result["opt_state"]
 ```
 which can be embedded in a jitted function with any other additional
-computations. One can also use any Optax optimiser to equilibrate the inference
-dynamics by replacing the function in step 2, as shown below.
+computations. One can also use any [optax
+](https://optax.readthedocs.io/en/latest/api/optimizers.html) optimiser to
+equilibrate the inference dynamics by replacing the function in step 2, as
+shown below.
 ```py
 activity_optim = optax.sgd(1e-3)
@@ -60,8 +62,11 @@ for t in range(T):
     # 3. update parameters at the activities' solution with PC
     ...
 ```
-JPC also comes with some analytical tools that can be used to study and
-potentially diagnose issues with PCNs (see [docs
+See the [updates docs
+](https://thebuckleylab.github.io/jpc/api/Updates/) for more details. JPC also
+comes with some analytical tools that can be used to study and potentially
+diagnose issues with PCNs
+(see [docs
 ](https://thebuckleylab.github.io/jpc/api/Analytical%20tools/) and [example
 notebook
 ](https://thebuckleylab.github.io/jpc/examples/linear_net_theoretical_energy/)).
diff --git a/docs/basic_usage.md b/docs/basic_usage.md
index 0497242..815bb38 100644
--- a/docs/basic_usage.md
+++ b/docs/basic_usage.md
@@ -39,7 +39,7 @@ update_result = jpc.make_pc_step(
 model = update_result["model"]
 optim, opt_state = update_result["optim"], update_result["opt_state"]
 ```
-As shown above, at a minimum `jpc.make_pc_step` takes a model, an [Optax
+As shown above, at a minimum `jpc.make_pc_step` takes a model, an [optax
 ](https://github.com/google-deepmind/optax) optimiser and its state, and some
 data. The model needs to be compatible with PC updates in the sense that
 it's split into callable layers (see the
@@ -50,7 +50,7 @@ that the `input` is actually not needed for unsupervised training. In fact,
 supervised as well as unsupervised training (again see the [example notebooks
 ](https://thebuckleylab.github.io/jpc/examples/discriminative_pc/)).
 
-Under the hood, `jpc.make_pc_step` uses [Diffrax
+Under the hood, `jpc.make_pc_step` uses [diffrax
 ](https://github.com/patrick-kidger/diffrax) to solve the activity (inference)
 dynamics of PC. Many default arguments, for example related to the ODE solver,
 can be changed, including the ODE solver, and there is an option to record a
diff --git a/docs/index.md b/docs/index.md
index a85d20a..7a0e771 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -77,8 +77,8 @@ optim, opt_state = result["optim"], result["opt_state"]
 ```
 
 Under the hood, `jpc.make_pc_step`
-1. integrates the inference (activity) dynamics using a [Diffrax](https://github.com/patrick-kidger/diffrax) ODE solver, and
-2. updates model parameters at the numerical solution of the activities with a given [Optax](https://github.com/google-deepmind/optax) optimiser.
+1. integrates the inference (activity) dynamics using a [diffrax](https://github.com/patrick-kidger/diffrax) ODE solver, and
+2. updates model parameters at the numerical solution of the activities with a given [optax](https://github.com/google-deepmind/optax) optimiser.
 
 > **NOTE**: All convenience training and test functions such as `make_pc_step`
 > are already "jitted" (for optimised performance) for the user's convenience.