add mwe for fypp preprocessor #60

Merged: 20 commits, Nov 21, 2023

Changes from 15 commits
Commits (20)
eac2260
add mwe for fypp preprocessor
TomMelt Nov 3, 2023
e94e0c9
working example of _fortran only_ interface
TomMelt Nov 17, 2023
2f08e98
Add suffix to fypp functions to show dimensionality.
jatkinson1000 Nov 18, 2023
2c471db
Tidy some of fypp file and add some documentation.
jatkinson1000 Nov 18, 2023
c2e50ef
Tidy enumerator docs
jatkinson1000 Nov 18, 2023
e89507f
Add pre-commit hook file to check fypp validity.
jatkinson1000 Nov 18, 2023
424b3ed
Bring fypp and f90 in line and tidy docs.
jatkinson1000 Nov 18, 2023
7fe963b
Add fypp workflow attempt.
jatkinson1000 Nov 18, 2023
1c5b814
Allow fypp workflow to run hook using chmod +x.
jatkinson1000 Nov 18, 2023
9d4b3d7
fypp runs on all pushes, so no need to duplicate on PRs.
jatkinson1000 Nov 18, 2023
0cc57b6
Modify fypp workflow to fail if ftorch.f90 does not match expected re…
jatkinson1000 Nov 18, 2023
f53cbc7
Update README example consistent with torch_tensor_from_array approach.
jatkinson1000 Nov 18, 2023
a55cdc3
Update example 1 to use torch_tensor_from_array.
jatkinson1000 Nov 19, 2023
b3c479f
Correction to example Fortran in README to provide missing output par…
jatkinson1000 Nov 20, 2023
6616c35
Update ftorch to standardise argument order.
jatkinson1000 Nov 21, 2023
8f8345d
Remove tensor_tests as outdated and to be replaced by cgdrag examples…
jatkinson1000 Nov 21, 2023
656fa9d
Update pre-commit hook to be bash.
jatkinson1000 Nov 21, 2023
703bc5a
Create a githooks directory for pre-commit hook.
jatkinson1000 Nov 21, 2023
d74e650
Create a githooks directory for pre-commit hook
jatkinson1000 Nov 21, 2023
7943978
update order of args for torch_tensor_from_blob
TomMelt Nov 21, 2023
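
Taken together, these commits move the library and examples from the pointer-based `torch_tensor_from_blob` interface to `torch_tensor_from_array` and standardise the argument order. A minimal before/after sketch, with declarations that mirror the SimpleNet example further down (illustrative only, not a standalone program):

```fortran
! Illustrative only: declarations mirror examples/1_SimpleNet and assume the
! ftorch module; this is not a complete, compilable program.
use, intrinsic :: iso_c_binding, only: c_int, c_int64_t, c_loc  ! needed by the old interface only
use ftorch

real, dimension(5), target :: in_data
integer(c_int), parameter :: tensor_dims = 1
integer(c_int64_t) :: tensor_shape(tensor_dims) = [5]
integer(c_int) :: tensor_layout(tensor_dims) = [1]
type(torch_tensor) :: in_tensor

! Old interface: raw C pointer plus explicit rank, shape and dtype.
in_tensor = torch_tensor_from_blob(c_loc(in_data), tensor_dims, tensor_shape, &
                                   torch_kFloat32, torch_kCPU, tensor_layout)

! New interface: pass the Fortran array directly; rank, shape and dtype are inferred.
in_tensor = torch_tensor_from_array(in_data, tensor_layout, torch_kCPU)
```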
27 changes: 27 additions & 0 deletions .github/workflows/fypp.yml
@@ -0,0 +1,27 @@
name: fypp-checks

on:
  # run on every push
  push:

jobs:
  various:
    name: FYPP checks - runs check on fypp and f90 files
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - run: pip install fypp

      - name: Check fypp matches f90
        run: |
          fypp src/ftorch.fypp src/temp.f90_temp
          if ! diff -q src/ftorch.f90 src/temp.f90_temp; then
            echo "Error: The code in ftorch.f90 does not match that expected from ftorch.fypp."
            echo "Please re-run fypp on ftorch.fypp to ensure consistency and re-commit."
            exit 1
          else
            exit 0
          fi
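
The workflow above regenerates ftorch.f90 from ftorch.fypp and fails if the committed Fortran no longer matches the template output. ftorch.fypp itself is not part of this diff, so the snippet below is only a hypothetical illustration of the kind of rank-templated generation fypp enables (in the spirit of the "Add suffix to fypp functions to show dimensionality" commit); it is not the FTorch source.

```fortran
#! Hypothetical fypp template, for illustration only (not taken from ftorch.fypp).
#! Running `fypp demo.fypp demo.f90` emits one subroutine per rank listed below.
#:set RANKS = [1, 2]
#:for RANK in RANKS
subroutine print_shape_${RANK}$d(a)
  ! Report the shape of a rank-${RANK}$ default-real array.
  real, intent(in) :: a(${":," * (RANK - 1)}$:)
  write(*,*) "rank:", ${RANK}$, "shape:", shape(a)
end subroutine print_shape_${RANK}$d
#:endfor
```

The CI check then amounts to running fypp and diffing the result against the committed file, exactly as the "Check fypp matches f90" step does.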
36 changes: 16 additions & 20 deletions README.md
@@ -128,11 +128,10 @@ To use the trained Torch model from within Fortran we need to import the `ftorch`
A very simple example is given below.
For more detailed documentation please consult the API documentation, source code, and examples.

This minimal snippet loads a saved Torch model, creates inputs consisting of two `10x10` matrices (one of ones, and one of zeros), and runs the model to infer output.
This minimal snippet loads a saved Torch model, creates an input consisting of a `10x10` matrix of ones, and runs the model to infer output.
This is illustrative only, and we recommend following the [examples](examples/) before writing your own code to explore more features.

```fortran
! Import any C bindings as required for this code
use, intrinsic :: iso_c_binding, only: c_int, c_int64_t, c_loc
! Import library for interfacing with PyTorch
use ftorch

@@ -141,34 +140,32 @@ implicit none
! Generate an object to hold the Torch model
type(torch_module) :: model

! Set up types of input and output data and the interface with C
integer(c_int), parameter :: dims_input = 2
integer(c_int64_t) :: shape_input(dims_input)
integer(c_int), parameter :: n_inputs = 2
! Set up types of input and output data
integer, parameter :: n_inputs = 1
type(torch_tensor), dimension(n_inputs) :: model_input_arr
integer(c_int), parameter :: dims_output = 1
integer(c_int64_t) :: shape_output(dims_output)
type(torch_tensor) :: model_output

! Set up the model inputs as Fortran arrays
real, dimension(10,10), target :: input_1, input_2
! Set up the model input and output as Fortran arrays
real, dimension(10,10), target :: input
real, dimension(5), target :: output

! Set up number of dimensions of input tensor and axis order
integer, parameter :: in_dims = 2
integer :: in_layout(in_dims) = [1,2]
integer, parameter :: out_dims = 1
integer :: out_layout(out_dims) = [1]

! Initialise the Torch model to be used
model = torch_module_load("/path/to/saved/model.pt")

! Initialise the inputs as Fortran
input_1 = 0.0
input_2 = 1.0
! Initialise the inputs as Fortran array of ones
input = 1.0

! Wrap Fortran data as no-copy Torch Tensors
! There may well be some reshaping required depending on the
! structure of the model which is not covered here (see examples)
shape_input = (/10, 10/)
shape_output = (/5/)
model_input_arr(1) = torch_tensor_from_blob(c_loc(input_1), dims_input, shape_input, torch_kFloat64, torch_kCPU)
model_input_arr(2) = torch_tensor_from_blob(c_loc(input_2), dims_input, shape_input, torch_kFloat64, torch_kCPU)
model_output = torch_tensor_from_blob(c_loc(output), dims_output, shape_output, torch_kFloat64, torch_kCPU)
model_input_arr(1) = torch_tensor_from_array(input, in_layout, torch_kCPU)
model_output = torch_tensor_from_array(output, out_layout, torch_kCPU)

! Run model and Infer
! Again, there may be some reshaping required depending on model design
@@ -180,7 +177,6 @@ write(*,*) output
! Clean up
call torch_module_delete(model)
call torch_tensor_delete(model_input_arr(1))
call torch_tensor_delete(model_input_arr(2))
call torch_tensor_delete(model_output)
```

40 changes: 18 additions & 22 deletions examples/1_SimpleNet/simplenet_infer_fortran.f90
@@ -1,47 +1,45 @@
program inference

! Imports primitives used to interface with C
use, intrinsic :: iso_c_binding, only: c_int64_t, c_float, c_char, c_ptr, c_loc
! Import precision info from iso
use, intrinsic :: iso_fortran_env, only : sp => real32

! Import our library for interfacing with PyTorch
use ftorch

implicit none


! Set precision for reals
integer, parameter :: wp = sp

integer :: num_args, ix
character(len=128), dimension(:), allocatable :: args

! Set up types of input and output data and the interface with C
! Set up Fortran data structures
real(wp), dimension(5), target :: in_data
real(wp), dimension(5), target :: out_data
integer, parameter :: n_inputs = 1
integer :: tensor_layout(1) = [1]

! Set up Torch data structures
type(torch_module) :: model
type(torch_tensor), dimension(1) :: in_tensor
type(torch_tensor) :: out_tensor

real(c_float), dimension(:), allocatable, target :: in_data
integer(c_int), parameter :: n_inputs = 1
real(c_float), dimension(:), allocatable, target :: out_data

integer(c_int), parameter :: tensor_dims = 1
integer(c_int64_t) :: tensor_shape(tensor_dims) = [5]
integer(c_int) :: tensor_layout(tensor_dims) = [1]

! Get TorchScript model file as a command line argument
num_args = command_argument_count()
allocate(args(num_args))
do ix = 1, num_args
call get_command_argument(ix,args(ix))
end do

! Allocate one-dimensional input/output arrays, based on multiplication of all input/output dimension sizes
allocate(in_data(tensor_shape(1)))
allocate(out_data(tensor_shape(1)))

! Initialise data
in_data = [0.0, 1.0, 2.0, 3.0, 4.0]

! Create input/output tensors from the above arrays
in_tensor(1) = torch_tensor_from_blob(c_loc(in_data), tensor_dims, tensor_shape, torch_kFloat32, torch_kCPU, tensor_layout)
out_tensor = torch_tensor_from_blob(c_loc(out_data), tensor_dims, tensor_shape, torch_kFloat32, torch_kCPU, tensor_layout)
! Create Torch input/output tensors from the above arrays
in_tensor(1) = torch_tensor_from_array(in_data, tensor_layout, torch_kCPU)
out_tensor = torch_tensor_from_array(out_data, tensor_layout, torch_kCPU)

! Load ML model (edit this line to use different models)
! Load ML model
model = torch_module_load(args(1))

! Infer
@@ -52,7 +50,5 @@ program inference
call torch_module_delete(model)
call torch_tensor_delete(in_tensor(1))
call torch_tensor_delete(out_tensor)
deallocate(in_data)
deallocate(out_data)

end program inference
39 changes: 17 additions & 22 deletions examples/2_ResNet18/resnet_infer_fortran.f90
@@ -1,18 +1,12 @@
program inference

! Imports primitives used to interface with C
use, intrinsic :: iso_c_binding, only: c_sp=>c_float, c_dp=>c_double, c_int64_t, c_loc
use, intrinsic :: iso_fortran_env, only : sp => real32, dp => real64
use, intrinsic :: iso_fortran_env, only : sp => real32
! Import our library for interfacing with PyTorch
use :: ftorch

implicit none

! Define working precision for C primitives
! Precision must match `wp` in resnet18.py and `wp_torch` in pt2ts.py
integer, parameter :: c_wp = c_sp
integer, parameter :: wp = sp
integer, parameter :: torch_wp = torch_kFloat32

call main()

@@ -25,21 +19,21 @@ subroutine main()
integer :: num_args, ix
character(len=128), dimension(:), allocatable :: args

! Set up types of input and output data and the interface with C
! Set up types of input and output data
type(torch_module) :: model
type(torch_tensor), dimension(1) :: in_tensor
type(torch_tensor) :: out_tensor

real(c_wp), dimension(:,:,:,:), allocatable, target :: in_data
integer(c_int), parameter :: n_inputs = 1
real(c_wp), dimension(:,:), allocatable, target :: out_data
real(wp), dimension(:,:,:,:), allocatable, target :: in_data
real(wp), dimension(:,:), allocatable, target :: out_data
integer, parameter :: n_inputs = 1

integer(c_int), parameter :: in_dims = 4
integer(c_int64_t) :: in_shape(in_dims) = [1, 3, 224, 224]
integer(c_int) :: in_layout(in_dims) = [1,2,3,4]
integer(c_int), parameter :: out_dims = 2
integer(c_int64_t) :: out_shape(out_dims) = [1, 1000]
integer(c_int) :: out_layout(out_dims) = [1,2]
integer, parameter :: in_dims = 4
integer :: in_shape(in_dims) = [1, 3, 224, 224]
integer :: in_layout(in_dims) = [1,2,3,4]
integer, parameter :: out_dims = 2
integer :: out_shape(out_dims) = [1, 1000]
integer :: out_layout(out_dims) = [1,2]

! Binary file containing input tensor
character(len=*), parameter :: filename = '../data/image_tensor.dat'
@@ -72,8 +66,9 @@ subroutine main()
call load_data(filename, tensor_length, in_data)

! Create input/output tensors from the above arrays
in_tensor(1) = torch_tensor_from_blob(c_loc(in_data), in_dims, in_shape, torch_wp, torch_kCPU, in_layout)
out_tensor = torch_tensor_from_blob(c_loc(out_data), out_dims, out_shape, torch_wp, torch_kCPU, out_layout)
in_tensor(1) = torch_tensor_from_array(in_data, in_layout, torch_kCPU)

out_tensor = torch_tensor_from_array(out_data, out_layout, torch_kCPU)

! Load ML model (edit this line to use different models)
model = torch_module_load(args(1))
@@ -113,9 +108,9 @@ subroutine load_data(filename, tensor_length, in_data)

character(len=*), intent(in) :: filename
integer, intent(in) :: tensor_length
real(c_wp), dimension(:,:,:,:), intent(out) :: in_data
real(wp), dimension(:,:,:,:), intent(out) :: in_data

real(c_wp) :: flat_data(tensor_length)
real(wp) :: flat_data(tensor_length)
integer :: ios
character(len=100) :: ioerrmsg

@@ -166,7 +161,7 @@ subroutine calc_probs(out_data, probabilities)

implicit none

real(c_wp), dimension(:,:), intent(in) :: out_data
real(wp), dimension(:,:), intent(in) :: out_data
real(wp), dimension(:,:), intent(out) :: probabilities
real(wp) :: prob_sum

69 changes: 69 additions & 0 deletions pre-commit
@@ -0,0 +1,69 @@
#!/bin/sh
#
# A hook script to verify what is about to be committed.
# Called by "git commit" with no arguments. The hook should
# exit with non-zero status after issuing an appropriate message if
# it wants to stop the commit.

# Fail immediately at first issue with the relevant exit status.
set -eo pipefail

# ===================================================================

if git rev-parse --verify HEAD >/dev/null 2>&1
then
    against=HEAD
else
    # Initial commit: diff against an empty tree object
    against=$(git hash-object -t tree /dev/null)
fi

# ===================================================================

# Check that ftorch.f90 is not modified and staged alone.
git diff --cached --name-only | if grep --quiet "ftorch.f90"; then
    git diff --cached --name-only | if ! grep --quiet "ftorch.fypp"; then
        cat <<\EOF
Error: File ftorch.f90 has been modified and staged without ftorch.fypp being changed.
ftorch.f90 should be generated from ftorch.fypp using fypp.
Please restore ftorch.f90 and make your modifications to ftorch.fypp instead.
EOF
        exit 1
    fi
fi

# Check to see if ftorch.fypp has been modified AND is staged.
git diff --cached --name-only | if grep --quiet "ftorch.fypp"; then

    # Check that ftorch.f90 is also modified and staged.
    git diff --cached --name-only | if ! grep --quiet "ftorch.f90"; then
        cat <<\EOF
Error: File ftorch.fypp has been modified and staged, but ftorch.f90 has not.
ftorch.f90 should be generated from ftorch.fypp and both committed together.
Please run fypp on ftorch.fypp to generate ftorch.f90 and commit together.
EOF
        exit 1
    else
        # Check fypp is available, and raise an error and exit if not.
        if ! command -v fypp &> /dev/null; then
            cat <<\EOF
Error: Could not find fypp to run on ftorch.fypp.
Please install fypp using "pip install fypp" and then try committing again.
EOF
            exit 1
        fi

        # If fypp is available and both .f90 and .fypp staged, check they match.
        fypp src/ftorch.fypp src/ftorch.f90_tmp
        if ! diff -q "src/ftorch.f90" "src/ftorch.f90_tmp" &> /dev/null; then
            rm src/ftorch.f90_tmp
            cat <<\EOF
Error: The code in ftorch.f90 does not match that expected from ftorch.fypp.
Please re-run fypp on ftorch.fypp to ensure consistency before committing.
EOF
            exit 1
        else
            rm src/ftorch.f90_tmp
        fi
    fi
fi