(Based on a comment from when I presented this result at EPFL.) The generated code is very GPU-friendly. We could also generate code for PyTorch.

In fact, the Bader, Blanes, Casas method (`graph_bbc` in our code) is currently the default in PyTorch's `matrix_exp`, and we can do a lot better than the BBC method, so there is certainly room for improvement. It is implemented in the C++ code, in `mexp_impl` and `mexp`:

https://github.com/pytorch/pytorch/blob/f4dd88489a77ff9b300bf6f9b34c233ca82f76d7/aten/src/ATen/native/LinearAlgebra.cpp#L2111

https://github.com/pytorch/pytorch/blob/f4dd88489a77ff9b300bf6f9b34c233ca82f76d7/aten/src/ATen/native/LinearAlgebra.cpp#L2185

and in the lower-level code `compute_T1`, ...:

https://github.com/pytorch/pytorch/blob/f4dd88489a77ff9b300bf6f9b34c233ca82f76d7/aten/src/ATen/native/LinearAlgebra.cpp#L1855
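For reference, the Python-level entry point for that C++ code is `torch.linalg.matrix_exp` (also exposed as `torch.matrix_exp`). A minimal usage sketch, just to show the call our generated code would have to match or replace:

```python
import torch

# Batched matrix exponential; on a CUDA tensor the evaluation
# (the matmuls inside mexp_impl / the compute_T* helpers) runs on the GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
A = torch.randn(8, 4, 4, device=device)
expA = torch.linalg.matrix_exp(A)
```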
I would suggest we add a language (code-generation target) with a `variant` symbol. If `variant=:plain`, it will generate self-contained C++ code; if `variant=:pytorch`, it generates PyTorch-compatible code.
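To make the suggestion concrete, here is a rough, hypothetical sketch (not output of the current package) of what a `variant=:pytorch` evaluator for a small polynomial graph could look like. The point is that it only needs batched matmul/add operations on `torch.Tensor`s, so it runs unchanged on GPU tensors and on batches of matrices; the example graph and its coefficients are illustrative only:

```python
import torch

def graph_eval(A: torch.Tensor) -> torch.Tensor:
    """Hypothetical generated evaluator (illustration only): evaluates the
    degree-3 Taylor polynomial I + A + A^2/2 + A^3/6, written with plain
    torch ops so it works on CPU/GPU and on batches of matrices alike."""
    n = A.shape[-1]
    I = torch.eye(n, dtype=A.dtype, device=A.device)
    A2 = A @ A          # one matmul per multiplication node in the graph
    A3 = A2 @ A
    return I + A + 0.5 * A2 + (1.0 / 6.0) * A3
```

A `variant=:plain` output would emit the same sequence of operations as self-contained C++ (e.g. against a BLAS `gemm`), so the two variants would differ only in the emitted syntax, not in the underlying graph.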