Fix SVGP_Multitask_GP_Regression.ipynb (#2561)
flcello authored Aug 13, 2024
1 parent 283105b commit 5c7986e
Showing 1 changed file with 8 additions and 8 deletions.
@@ -73,8 +73,8 @@
     "\n",
     "We are going to construct a batch variational GP - using a `CholeskyVariationalDistribution` and a `VariationalStrategy`. Each of the batch dimensions is going to correspond to one of the outputs. In addition, we will wrap the variational strategy to make the output appear as a `MultitaskMultivariateNormal` distribution. Here are the changes that we'll need to make:\n",
     "\n",
-    "1. Our inducing points will need to have shape `2 x m x 1` (where `m` is the number of inducing points). This ensures that we learn a different set of inducing points for each output dimension.\n",
-    "1. The `CholeskyVariationalDistribution`, mean module, and covariance modules will all need to include a `batch_shape=torch.Size([2])` argument. This ensures that we learn a different set of variational parameters and hyperparameters for each output dimension.\n",
+    "1. Our inducing points will need to have shape `4 x m x 1` (where `m` is the number of inducing points). This ensures that we learn a different set of inducing points for each output dimension.\n",
+    "1. The `CholeskyVariationalDistribution`, mean module, and covariance modules will all need to include a `batch_shape=torch.Size([4])` argument. This ensures that we learn a different set of variational parameters and hyperparameters for each output dimension.\n",
     "1. The `VariationalStrategy` object should be wrapped by a variational strategy that handles multitask models. We describe them below:\n",
     "\n",
     "\n",
@@ -97,7 +97,7 @@
     "num_tasks = 4\n",
     "\n",
     "class MultitaskGPModel(gpytorch.models.ApproximateGP):\n",
-    "    def __init__(self):\n",
+    "    def __init__(self, num_latents, num_tasks):\n",
     "        # Let's use a different set of inducing points for each latent function\n",
     "        inducing_points = torch.rand(num_latents, 16, 1)\n",
     "        \n",
@@ -113,8 +113,8 @@
     "            gpytorch.variational.VariationalStrategy(\n",
     "                self, inducing_points, variational_distribution, learn_inducing_locations=True\n",
     "            ),\n",
-    "            num_tasks=4,\n",
-    "            num_latents=3,\n",
+    "            num_tasks=num_tasks,\n",
+    "            num_latents=num_latents,\n",
     "            latent_dim=-1\n",
     "        )\n",
     "        \n",
@@ -136,7 +136,7 @@
     "        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)\n",
     "\n",
     "\n",
-    "model = MultitaskGPModel()\n",
+    "model = MultitaskGPModel(num_latents, num_tasks)\n",
     "likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=num_tasks)"
    ]
   },
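The training loop itself is outside this diff. As a hedged sketch of how the now-parameterized constructor is exercised, the standard SVGP setup with `gpytorch.mlls.VariationalELBO` would look roughly like this (the `train_x`/`train_y` tensors here are hypothetical stand-ins, not the notebook's data):

```python
import math

import torch
import gpytorch

# Hypothetical training data: 100 inputs, 4 output tasks
train_x = torch.linspace(0, 1, 100)
train_y = torch.stack([
    torch.sin(train_x * (2 * math.pi)),
    torch.cos(train_x * (2 * math.pi)),
    torch.sin(train_x * (2 * math.pi)) + torch.cos(train_x * (2 * math.pi)),
    -torch.sin(train_x * (2 * math.pi)),
], dim=-1)

model = MultitaskGPModel(num_latents=3, num_tasks=4)
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=4)

model.train()
likelihood.train()
optimizer = torch.optim.Adam([
    {"params": model.parameters()},
    {"params": likelihood.parameters()},
], lr=0.1)
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))

for _ in range(100):
    optimizer.zero_grad()
    output = model(train_x)   # a MultitaskMultivariateNormal over the 4 tasks
    loss = -mll(output, train_y)
    loss.backward()
    optimizer.step()
```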
@@ -181,7 +181,7 @@
    "outputs": [],
    "source": [
     "class IndependentMultitaskGPModel(gpytorch.models.ApproximateGP):\n",
-    "    def __init__(self):\n",
+    "    def __init__(self, num_tasks):\n",
     "        # Let's use a different set of inducing points for each task\n",
     "        inducing_points = torch.rand(num_tasks, 16, 1)\n",
     "        \n",
@@ -195,7 +195,7 @@
     "            gpytorch.variational.VariationalStrategy(\n",
     "                self, inducing_points, variational_distribution, learn_inducing_locations=True\n",
     "            ),\n",
-    "            num_tasks=4,\n",
+    "            num_tasks=num_tasks,\n",
     "        )\n",
     "        \n",
     "        super().__init__(variational_strategy)\n",