0.1.3 Release
The TorchDrug 0.1.3 release introduces new features such as W&B integration and index reference. It also provides new functions and metrics for common development needs. Note that 0.1.3 includes some compatibility changes, so be careful when updating TorchDrug from an older version.
- W&B Integration
- Index Reference
- New Functions
- New Metrics
- Improvements
- Bug Fixes
- Compatibility Changes
W&B Integration
Tracking experiment progress is one of the most important demands from ML researchers and developers. For TorchDrug users, we provide a native integration with the W&B platform. By adding only one argument to core.Engine, TorchDrug will automatically copy every hyperparameter and training log to your W&B database (thanks to @manangoel99).
```python
solver = core.Engine(task, train_set, valid_set, test_set, optimizer, logger="wandb")
```
Now you can track your training and validation performance in your browser, and compare them across different experiments.
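For context, here is a minimal end-to-end sketch in the spirit of the TorchDrug quickstart; the dataset, model, and hyperparameters are illustrative choices rather than recommendations, and it assumes the wandb package is installed and you have already run `wandb login`.

```python
import torch
from torchdrug import core, datasets, models, tasks

# illustrative dataset/model choices; any property prediction setup works the same way
dataset = datasets.ClinTox("~/molecule-datasets/")
lengths = [int(0.8 * len(dataset)), int(0.1 * len(dataset))]
lengths += [len(dataset) - sum(lengths)]
train_set, valid_set, test_set = torch.utils.data.random_split(dataset, lengths)

model = models.GIN(input_dim=dataset.node_feature_dim, hidden_dims=[256, 256, 256, 256])
task = tasks.PropertyPrediction(model, task=dataset.tasks,
                                criterion="bce", metric=("auprc", "auroc"))

optimizer = torch.optim.Adam(task.parameters(), lr=1e-3)
# logger="wandb" is the only change needed to stream hyperparameters and metrics to W&B
solver = core.Engine(task, train_set, valid_set, test_set, optimizer,
                     batch_size=1024, logger="wandb")
solver.train(num_epoch=10)
solver.evaluate("valid")
```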
Index Reference
Maintaining node and edge attributes can be painful when one applies many transformations to a graph. TorchDrug aims to eliminate such tedious steps by registering custom attributes. This update extends custom attributes to support index reference: attributes may refer to indexes of nodes, edges or graphs, and they will be automatically maintained in any graph operation.
To use index reference, simply add a context manager when defining the attributes.
```python
with graph.edge(), graph.edge_reference():
    graph.inv_edge_index = torch.tensor(inv_edge_index)
```
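As a concrete illustration, the following sketch builds a hypothetical toy graph with a precomputed inv_edge_index, registers it as an edge reference, and then applies an edge operation; because the attribute is declared as a reference, the stored indexes are expected to be remapped to the new edge numbering automatically.

```python
import torch
from torchdrug import data

# toy graph with two inverse edge pairs: (0,1)/(1,0) and (1,2)/(2,1)
graph = data.Graph(edge_list=[[0, 1], [1, 0], [1, 2], [2, 1]], num_node=3)
inv_edge_index = [1, 0, 3, 2]  # each edge points to its inverse

with graph.edge(), graph.edge_reference():
    graph.inv_edge_index = torch.tensor(inv_edge_index)

# keep only the second pair of edges; since inv_edge_index is an edge reference,
# it should be remapped to the new numbering (expected [1, 0]) instead of
# keeping the stale indexes 3 and 2
subgraph = graph.edge_mask([2, 3])
```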
For more details on index reference, please take a look at our notes. Typical use cases include
- A pointer to the inverse edge of each edge.
- A pointer to the parent node of each node in a tree.
- A pointer to the incoming tree edge of each node in a DFS.
Let us know if you find more interesting usage of index reference!
New Functions
Message passing over line graphs has become increasingly popular in recent years. This version provides data.Graph.line_graph to efficiently construct line graphs on GPUs. It supports both a single graph and a batch of graphs.
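A minimal sketch of the new method (assuming it can be called without arguments, on both a single graph and a packed batch):

```python
from torchdrug import data

graph = data.Graph(edge_list=[[0, 1], [1, 2], [2, 0]], num_node=3)
line = graph.line_graph()
# nodes of the line graph correspond to edges of the original graph

# the batched case, using a packed batch of graphs
batch = data.Graph.pack([graph, graph])
line_batch = batch.line_graph()
```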
We are constantly working on better batching of irregular structures, and the variadic functions in TorchDrug are an efficient way to process batches of variadic-sized tensors without padding. This update introduces 3 new variadic functions.
- variadic_meshgrid generates a meshgrid from two variadic tensors. Useful for implementing pairwise operations.
- variadic_to_padded converts a variadic tensor to a padded tensor.
- padded_to_variadic converts a padded tensor to a variadic tensor.
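A sketch of the round trip between variadic and padded layouts. It assumes the functions live in torchdrug.layers.functional like the other variadic utilities, and that variadic_to_padded also returns a validity mask; check the API reference to confirm the exact signatures.

```python
import torch
from torchdrug.layers import functional

# three samples of sizes 2, 3 and 1 stored back-to-back in one flat tensor
value = torch.tensor([1, 2, 3, 4, 5, 6])
size = torch.tensor([2, 3, 1])

# variadic -> padded: a dense (batch, max_size) tensor, assumed to come with a mask
padded, mask = functional.variadic_to_padded(value, size)

# padded -> variadic: back to the flat layout, using the same sizes
flat = functional.padded_to_variadic(padded, size)
```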
New Metrics
New metrics include accuracy, matthews_corrcoef, pearsonr and spearmanr. All of these metrics are the same as their counterparts in scipy / scikit-learn, but they are implemented in PyTorch and support auto differentiation.
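A sketch of how the new metrics might be called, assuming they live in torchdrug.metrics and follow the usual pred/target calling convention of the existing metrics (the expected shapes should be checked in the API reference):

```python
import torch
from torchdrug import metrics

# correlation-style metrics on continuous predictions
pred = torch.randn(32)
target = torch.randn(32)
r = metrics.pearsonr(pred, target)
rho = metrics.spearmanr(pred, target)

# because the metrics are implemented in PyTorch, gradients can flow through them
pred.requires_grad_()
loss = -metrics.pearsonr(pred, target)
loss.backward()
```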
Improvements
- Add data.Graph.to (#70, thanks to @cthoyt)
- Extend tasks.SynthonCompletion for arbitrary atom features (#62)
- Speed up lazy data loading (#58, thanks to @wconnell)
- Speed up rspmm CUDA kernels
- Add Docker support
- Add more documentation for data.Graph and data.Molecule
Bug Fixes
- Fix computation of output dimension in several GNNs (#92, thanks to @kanojikajino)
- Fix data.PackedGraph.__getitem__ when the batch is empty (#83, thanks to @jannisborn)
- Fix patched modules for PyTorch>=1.6.0 (#77)
- Fix make_configurable for torch.utils.data (#85)
- Fix multi_slice_mask and variadic_max for multi-dimensional input
- Fix variadic_topk for input containing infinite values
Compatibility Changes
TorchDrug now supports Python 3.7/3.8/3.9. Starting from this version, TorchDrug requires a minimum PyTorch version of 1.8.0 and a minimum RDKit version of 2020.09.
The arguments node_feature and edge_feature are renamed to atom_feature and bond_feature in data.Molecule.from_smiles and data.Molecule.from_molecule. The old interface is still supported with deprecation warnings.
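For example, a call under the new argument names might look like the sketch below; the "default" feature set name is an assumption based on the existing documentation.

```python
from torchdrug import data

# new argument names in 0.1.3
mol = data.Molecule.from_smiles("C1=CC=CC=C1O",
                                atom_feature="default",
                                bond_feature="default")

# the old names (node_feature / edge_feature) still work but emit a deprecation warning
```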