Before you begin, ensure that you have Python installed on your system. pinn_clusters (https://github.com/YaoGroup/pinn_clusters) is compatible with Python 3.x. To install pinn_clusters, simply run the following command in your terminal:
python -m pip install "pinn_clusters @ git+https://github.com/YaoGroup/ice_1D_pinn.git"
To install the specific version used in the paper, run:
python -m pip install "pinn_clusters @ git+https://github.com/YaoGroup/ice_1D_pinn.git@379852f184dead0ef7f2f7e97bfbc9dc47e3407e"
After installing pinn_clusters, you can import any function from the package directly into your Python script. Just as you would import a function from another Python library such as TensorFlow (e.g. from tensorflow import keras), you can import a function from pinn_clusters using
from pinn_clusters import your_function_here
Replace your_function_here with the specific function you wish to use from the pinn_clusters package. For instance, to use the gamma_batch function, simply add this line to your code:
from pinn_clusters import gamma_batch
For usage on your local computer, we recommend the use of a conda environment. To install conda, please follow the instructions here.
Then, create a conda environment with Jupyter installed by running the command:
conda create --name pinn_test python=3.10 jupyter
Then, activate the new environment and start jupyter.
conda activate pinn_test
jupyter-lab
Then, open the example notebook within the jupyter-lab window.
loss.py contains the loss functions used in the paper, namely SquareLoss, used for fixed collocation points, and SquareLossRandom, used for collocation resampling. Please refer to the final section of this README ("Code Implementation of Collocation Resampling") for a detailed explanation of how these two functions differ.
Both loss functions evaluate the predictive accuracy of the neural network after each iteration according to the characteristic objective function of PINNs, which we call $J(\Theta)$ and in which we introduce an additional hyperparameter $\gamma$ weighting the data-fitting and equation (physics) components of the loss. See _formulations.py for the implementation of these equations in our code.
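For orientation, one common convention for such a two-term PINN objective is a convex combination of the equation loss and the data loss; the form below is a hedged sketch only, and the exact weighting used in the paper is defined in _formulations.py and loss.py:

$$
J(\Theta) = \gamma \, L_{\mathrm{eqn}}(\Theta) + (1 - \gamma)\, L_{\mathrm{data}}(\Theta), \qquad \gamma \in [0, 1],
$$

where $L_{\mathrm{eqn}}$ is the mean-squared residual of the governing equations at the collocation points and $L_{\mathrm{data}}$ is the mean-squared misfit to the training data.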
An instance of the SquareLoss function is initialized by the following code:
loss = SquareLoss(equations=physics_equations, equations_data=data_equations, gamma=gamma)
where

- equations: An iterable of callables with the signature function(x, neuralnet), corresponding to the governing physics equations. To enforce 1D SSA, we pass inverse_1st_order_equations imported from _formulations.py.
- equations_data: An iterable of callables with the signature function(x, neuralnet), corresponding to the data constraints. We use data_equations imported from _formulations.py.
- gamma (float): the value of $\gamma$ with which to evaluate the objective function $J(\Theta)$.
SquareLossRandom is initialized with the same arguments.
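Putting the pieces together, a minimal sketch of constructing the loss (the top-level import paths here are an assumption; the example notebook shows the authoritative usage):

```python
# Minimal sketch, assuming SquareLoss and the formulation helpers are exposed at
# the package top level (check the example notebook for the exact import paths).
from pinn_clusters import SquareLoss, inverse_1st_order_equations, data_equations

loss = SquareLoss(
    equations=inverse_1st_order_equations,  # physics residuals enforcing 1D SSA
    equations_data=data_equations,          # data-misfit terms
    gamma=0.5,                              # hyperparameter gamma in J(Theta)
)
```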
_formulations.py implements data_equations and inverse_1st_order_equations, which serve as the two components of the cost function. Also implemented are the following helper functions (a short usage sketch follows the list):
- analytic_h_constantB(x): analytic $h(x)$ solution for a constant $B(x)$ profile.
- analytic_u_constantB(x): analytic $u(x)$ solution for a constant $B(x)$ profile.
- get_collocation_points(x_train, xmin: float, xmax: float, N_t: int): generates a single set of collocation points.
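For example, a small sketch of the analytic helpers in use (the import path is assumed; adjust to the actual package layout):

```python
import numpy as np

# Sketch only: evaluate the analytic constant-B solutions on a uniform grid,
# e.g. to compare against the trained network's predictions.
from pinn_clusters import analytic_h_constantB, analytic_u_constantB

x = np.linspace(0.0, 1.0, 101)      # nondimensional along-flow coordinate
h_true = analytic_h_constantB(x)    # analytic thickness profile h(x)
u_true = analytic_u_constantB(x)    # analytic velocity profile u(x)
```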
Defines the values of the physical constants that appear in the physics-enforcing equations.
Helper functions for neural network initialization.
Implements Adam and L-BFGS optimizers.
Training can be switched between fixed collocation points and collocation resampling by changing the loss function used during training. The loss function evaluated by a given optimizer is specified when the optimizer is initialized. Use the SquareLoss loss function for fixed collocation points, and SquareLossRandom for random collocation resampling (see lines 77-100 in 'pinn_trials.py').
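Schematically, the switch amounts to nothing more than which loss object is handed to the optimizer (a sketch only; pinn_trials.py is the authoritative example, and no optimizer API is shown here because it is defined elsewhere in the package):

```python
# Sketch: the only change needed to enable collocation resampling is the loss class.
loss_fixed = SquareLoss(equations=inverse_1st_order_equations,
                        equations_data=data_equations, gamma=0.5)
loss_resampled = SquareLossRandom(equations=inverse_1st_order_equations,
                                  equations_data=data_equations, gamma=0.5)

# Pass one or the other when initializing the optimizer to select the training
# mode (fixed collocation points vs. collocation resampling).
```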
Comparing the SquareLoss and SquareLossRandom functions in 'loss.py', the main difference between the two is in the __call__ method. For SquareLossRandom, we add a few extra lines at the beginning of the __call__ method (lines 54-61):
def __call__(self, x_eqn, data_pts, net) -> Dict[str, tf.Tensor]:
    # Resample the collocation points at every call, i.e. at every training iteration.
    xmin = 0.0
    xmax = 1.0
    N_t = 1001
    _data_type = tf.float64
    # Draw N_t uniform samples in [xmin, xmax] from the stateful generator ...
    collocation_pts = xmin + (xmax - xmin) * self.col_gen.uniform(shape = [N_t])
    # ... then cube them, which concentrates points toward x = 0.
    collocation_pts = collocation_pts**3
where self.col_gen is a stateful random generator defined in the __init__ method (line 52):
self.col_gen = tf.random.get_global_generator()
Thus, the SquareLossRandom function generates a new set of collocation points every time it is called, i.e. at every iteration.
Important Note: It is essential to use a stateful random number generator such as a tf.random.Generator to ensure that the collocation points are resampled after each iteration. Using a random generator outside TensorFlow's stateful RNG machinery (such as the functions in the numpy.random module, or the lhs generator used in our code for fixed collocation point generation) will not update the collocation points inside a TensorFlow training loop: such calls are typically evaluated only once when the loss is traced into a graph, so the loss function behaves identically to training with fixed collocation points.
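To illustrate the distinction in isolation (a standalone sketch, not code from the repository):

```python
import numpy as np
import tensorflow as tf

# A stateful tf.random.Generator advances its internal state on every call,
# even inside a tf.function-compiled step, so new points are drawn each iteration.
gen = tf.random.get_global_generator()

@tf.function
def stateful_sample():
    return gen.uniform(shape=[3], dtype=tf.float64)

# NumPy sampling, by contrast, executes only once when the function is traced:
# the same values are baked into the graph and returned on every subsequent call.
@tf.function
def traced_numpy_sample():
    return tf.constant(np.random.uniform(size=3))

print(stateful_sample(), stateful_sample())          # different values each call
print(traced_numpy_sample(), traced_numpy_sample())  # identical values each call
```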
Yunona Iwasaki and Ching-Yao Lai. One-dimensional ice shelf hardness inversion: Clustering behavior and collocation resampling in physics-informed neural networks. Journal of Computational Physics, Volume 492, 2023, 112435, ISSN 0021-9991, https://doi.org/10.1016/j.jcp.2023.112435.
BibTex:
@article{IWASAKI2023112435,
title = {One-dimensional ice shelf hardness inversion: Clustering behavior and collocation resampling in physics-informed neural networks},
journal = {Journal of Computational Physics},
volume = {492},
pages = {112435},
year = {2023},
issn = {0021-9991},
doi = {https://doi.org/10.1016/j.jcp.2023.112435},
url = {https://www.sciencedirect.com/science/article/pii/S0021999123005302},
author = {Yunona Iwasaki and Ching-Yao Lai},
keywords = {Physics-informed neural networks, Scientific machine learning, Ice dynamics, Geophysical fluid dynamics, Nonlinear dynamics, Inverse problems},
}