Add batch_shape property to GP model class #2307
base: main
Conversation
LGTM!
This enables the use of `SingleTaskVariationalGP` with certain botorch features (e.g. with entropy-based acquisition functions as requested in pytorch#1795). This is a bit of a band-aid; the proper thing to do here is to fix up the PR upstreaming this to gpytorch (cornellius-gp/gpytorch#2307) to enable support for `batch_shape` on all approximate gpytorch models, and then just call that on the `model` in `ApproximateGPyTorchModel`.
Summary: This enables the use of `SingleTaskVariationalGP` with certain botorch features (e.g. with entropy-based acquisition functions as requested in #1795). This is a bit of a band-aid; the proper thing to do here is to fix up the PR upstreaming this to gpytorch (cornellius-gp/gpytorch#2307) to enable support for `batch_shape` on all approximate gpytorch models, and then just call that on the `model` in `ApproximateGPyTorchModel`.

Pull Request resolved: #1799
Reviewed By: dme65
Differential Revision: D45107761
Pulled By: Balandat
fbshipit-source-id: 44f1893d4ab915f744c17dfd5da4efd4923c66ae
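For context, a minimal sketch of the "proper" fix described above, where the botorch wrapper simply delegates to the wrapped gpytorch model. This is hypothetical code, assuming cornellius-gp/gpytorch#2307 lands and gives approximate gpytorch models their own `batch_shape`; the class name is illustrative, not the actual botorch implementation:

```python
import torch
from botorch.models.approximate_gp import ApproximateGPyTorchModel


class DelegatingApproximateGP(ApproximateGPyTorchModel):
    @property
    def batch_shape(self) -> torch.Size:
        # `self.model` is the wrapped approximate gpytorch model; once it
        # exposes `batch_shape` itself (per gpytorch#2307), the botorch
        # wrapper can simply forward the property.
        return self.model.batch_shape
```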
Implements #2301. TODO: verify compatibility with the botorch setup of other models.
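A minimal sketch of what such a property might look like on an approximate GP, following the standard gpytorch SVGP pattern. The implementation shown here (deriving the batch shape from the inducing points) is an assumption for illustration; the actual PR may compute it differently:

```python
import torch
import gpytorch


class SketchApproximateGP(gpytorch.models.ApproximateGP):
    """Illustrative approximate GP exposing a `batch_shape` property."""

    def __init__(self, inducing_points: torch.Tensor):
        # inducing_points: (*batch_shape, n_inducing, d)
        batch_shape = inducing_points.shape[:-2]
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(-2), batch_shape=batch_shape
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=batch_shape)
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=batch_shape),
            batch_shape=batch_shape,
        )

    @property
    def batch_shape(self) -> torch.Size:
        # Everything leading the (n_inducing, d) dims of the inducing points.
        return self.variational_strategy.inducing_points.shape[:-2]

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


model = SketchApproximateGP(torch.randn(3, 16, 2))
print(model.batch_shape)  # torch.Size([3])
```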
" \n", | ||
" # Locations Z_{d} corresponding to u_{d}, they can be randomly initialized or \n", | ||
" # regularly placed with shape (D x n_inducing x latent_dim).\n", | ||
" self.inducing_inputs = torch.randn(data_dim, n_inducing, latent_dim)\n", | ||
" \n", | ||
" # Sparse Variational Formulation (inducing variables initialised as randn)\n", | ||
" q_u = CholeskyVariationalDistribution(n_inducing, batch_shape=self.batch_shape) \n", | ||
" q_u = CholeskyVariationalDistribution(n_inducing, batch_shape=torch.Size([data_dim])) \n", |
Why aren't we using the `self.batch_shape` property here?
@Balandat should I merge this PR? Or do you still have edits you want to make?
Let's hold off. I have been running into some issues with this approach (caused by the fact that in botorch the batch shape is not necessarily the same as the gpytorch-internal batch shape). So right now this would cause a bunch of headaches and bugs on our end rather than only resolving some. We'll have to rethink whether this is the right approach.
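To make the mismatch concrete, a rough sketch of how a multi-output variational model ends up with a gpytorch-internal batch dimension even when the botorch-level batch shape inferred from the training inputs is empty. Shapes and internal attribute paths are assumptions and may vary across botorch/gpytorch versions:

```python
import torch
from botorch.models import SingleTaskVariationalGP

train_X = torch.rand(20, 3)  # no batch dims -> botorch-level batch shape torch.Size([])
train_Y = torch.rand(20, 2)  # two outputs

model = SingleTaskVariationalGP(
    train_X, train_Y, num_outputs=2, inducing_points=torch.rand(10, 3)
)

# Internally, the output dimension is folded into a gpytorch batch dimension,
# so the wrapped model's variational parameters are batched over outputs
# (e.g. inducing points of shape (2, 10, 3)):
strategy = model.model.variational_strategy.base_variational_strategy
print(strategy.inducing_points.shape)
```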