
Overload elementary operations #139

Merged
merged 51 commits into main from autograd
Dec 23, 2024

Conversation


@jwallwork23 commented Jun 21, 2024

Closes #138.

Note that about 1,000 lines of changes are automatically generated from the ftorch.fypp changes. So this PR isn't as enormous as it looks.

@jwallwork23 added the enhancement (New feature or request) label on Jun 21, 2024
@jwallwork23 self-assigned this on Jun 21, 2024
@jatkinson1000 (Member)

This seems sensible, and impressive 😄
What is the argument for making torch_tensor_from_array() no longer return a value but instead operate internally?
We can chat over lunch.

@jwallwork23 (Contributor, Author)

This seems sensible, and impressive 😄 What is the argument for making torch_tensor_from_array() no longer return a value but instead operate internally? We can chat over lunch.

Thanks!

If you overload the assignment operator then tensor = torch_tensor_from_array(...) will call torch_tensor_from_array with the provided arguments and then call torch_tensor_assign, copying that into tensor. If this and other constructor-type procedures go from being functions to subroutines then we avoid unnecessary data copies.
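
To illustrate the difference, here is a minimal sketch of the two calling styles (the argument list is indicative only and may not match the final FTorch interface exactly):

program constructor_styles
  use ftorch
  implicit none

  real, dimension(2,3), target :: in_data
  integer :: in_layout(2) = [1, 2]
  type(torch_tensor) :: tensor

  in_data = 1.0

  ! Function style: torch_tensor_from_array builds a temporary tensor and the
  ! overloaded assignment then copies it into `tensor` via torch_tensor_assign:
  !   tensor = torch_tensor_from_array(in_data, in_layout, torch_kCPU)

  ! Subroutine style (this PR): the tensor is constructed in place, so the
  ! intermediate copy is avoided:
  call torch_tensor_from_array(tensor, in_data, in_layout, torch_kCPU)

end program constructor_styles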

@jwallwork23 (Contributor, Author)

As discussed in today's meeting, we should check what happens if an operator that hasn't been implemented is used. I switched from Q = 3 * a ** 3 - b ** 2 to Q = 3 * a ** b - b ** 2, i.e., taking a tensor to the power of another tensor rather than to the power of a scalar. I get the following compile error, which seems clear enough:

Scanning dependencies of target autograd
[ 91%] Building Fortran object test/examples/6_Autograd/CMakeFiles/autograd.dir/autograd.f90.o
/home/joe/software/FTorch/src/test/examples/6_Autograd/autograd.f90:54:14:

   54 |   Q = 3 * a ** b - b ** 2
      |              1
Error: Unexpected derived-type entities in binary intrinsic numeric operator ‘**’ at (1)
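
For context, the original scalar-exponent form compiles because exponentiation is overloaded for a tensor raised to a scalar. Supporting a tensor exponent would need an extra specific procedure in that interface, roughly along these lines (a hypothetical sketch in the style of ftorch.fypp; the procedure names are illustrative, not existing ones):

interface operator (**)
#:for PREC in PRECISIONS
  module procedure torch_tensor_power_${PREC}$   ! tensor ** scalar (what the test uses)
#:endfor
  ! A tensor ** tensor specific would be needed for expressions like a ** b:
  ! module procedure torch_tensor_power_tensor
end interface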

src/ftorch.fypp Outdated
Comment on lines 130 to 140
interface operator (*)
  module procedure torch_tensor_multiply
#:for PREC in PRECISIONS
  module procedure torch_tensor_premultiply_${PREC}$
  module procedure torch_tensor_postmultiply_${PREC}$
#:endfor
end interface

interface operator (/)
  module procedure torch_tensor_divide
end interface
@jwallwork23 (Contributor, Author)

Might as well pre/post-divide by scalar, too.
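
That extension could mirror the multiply block above, e.g. (the predivide/postdivide procedure names here are illustrative, not necessarily those ultimately committed):

interface operator (/)
  module procedure torch_tensor_divide
#:for PREC in PRECISIONS
  module procedure torch_tensor_predivide_${PREC}$    ! scalar / tensor
  module procedure torch_tensor_postdivide_${PREC}$   ! tensor / scalar
#:endfor
end interface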

@jwallwork23 (Contributor, Author)

Post-divide implemented in b804d54. Left pre-division (i.e., scalar divided by tensor) for now as I don't think it's something people use often and I need to check what that actually does on the Torch side.
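
In usage terms (a sketch assuming a and c are torch_tensor variables and wp is the working-precision kind):

! Post-division (tensor / scalar), as added in b804d54:
c = a / 2.0_wp

! Pre-division (scalar / tensor) is not overloaded here, so this would not compile:
! c = 2.0_wp / a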

@jwallwork23 (Contributor, Author)

In f0ed978 I reworked the autograd test so that it tests multiply and divide without changing expected values.
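
One way to do that is to fold a multiply and a divide by the same factor into the existing expression, so the result is mathematically unchanged. For example (purely illustrative; not necessarily the exact rewrite used in f0ed978):

Q = 3 * a ** 3 * 2 / 2 - b ** 2   ! same value as 3 * a ** 3 - b ** 2, but exercises both * and /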

@jwallwork23 (Contributor, Author)

Merged in the Windows fix from #207.

@TomMelt (Member) left a comment

Thanks @jwallwork23, this looks great. Just a single comment really, and I think it should possibly be a separate issue anyway. Let me know what you think. I am happy to merge as is.

(edit: obvs once you address the merge conflict)

examples/CMakeLists.txt (outdated; resolved)
pages/autograd.md (resolved)
@jatkinson1000 (Member) left a comment

Thanks @jwallwork23
This is a huge step, and really great work.

I have left a couple of queries/suggestions, but the code itself looks good.
I will approve, so feel free to look at my comments as you please and merge.

examples/6_Autograd/autograd.f90 (resolved)
pages/autograd.md (outdated; resolved)
pages/developer.md (outdated; resolved)
pages/examples.md (resolved)
src/ctorch.cpp (outdated; resolved)
jwallwork23 and others added 2 commits December 20, 2024 14:56
Co-authored-by: Jack Atkinson <109271713+jatkinson1000@users.noreply.github.com>
@jwallwork23 merged commit 1376dcb into main Dec 23, 2024
6 checks passed
@jwallwork23 deleted the autograd branch December 23, 2024 09:22
@jwallwork23 mentioned this pull request Jan 6, 2025