PDP Service Contract Performance #16

Open
ZenGround0 opened this issue Aug 15, 2024 · 3 comments

@ZenGround0 (Contributor)

This is a placeholder issue for all speculative performance optimizations of the PDP service contract. We'll only pursue changes that address measured bottlenecks (most likely proving, but possibly also the SumTree index and contract history recording).

Here are a few that we predict may be useful. Please add more ideas in the comments as you come up with them.

  • Batch the add and remove methods
  • SumTree performance improvements
    • track treeStart for efficient searching when the front of the proof set is garbage collected
    • store multiple SumTree entries in one word to reduce storage loads (see the sketch after this list)
    • change the branching factor to shorten tree depth and match word size for better loading performance
  • Change hot code paths to inline assembly, especially the proving code path
    • this is likely already handled by well-optimized Merkle proof libraries
  • Do perf comparisons of Merkle tree libraries
  • Consider changing the PDP hash function for best perf
  • Recording history
    • Make assumptions about proving frequency to dramatically compress the recorded bytes
    • Try a record-keeping contract with a limited lifetime of recorded events. This keeps update costs bounded rather than growing with map size
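
To make the word-packing idea concrete, here is a minimal sketch assuming each SumTree entry fits in 64 bits, so four entries share one 256-bit storage slot. The contract name, the 64-bit width, and the accessor names are hypothetical, not the actual PDP contract layout:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical sketch: pack four 64-bit SumTree entries into one 256-bit
// storage slot, so accesses to neighboring entries within a transaction
// hit an already-warm slot instead of loading a fresh one.
contract PackedSumTreeSketch {
    // slot index => four packed 64-bit entries
    mapping(uint256 => uint256) private packed;

    function _get(uint256 index) internal view returns (uint64) {
        uint256 word = packed[index / 4];
        return uint64(word >> ((index % 4) * 64));
    }

    function _set(uint256 index, uint64 value) internal {
        uint256 slot = index / 4;
        uint256 shift = (index % 4) * 64;
        uint256 word = packed[slot];
        word &= ~(uint256(type(uint64).max) << shift); // clear the 64-bit lane
        word |= uint256(value) << shift;               // write the new value
        packed[slot] = word;
    }
}
```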
@anorth (Collaborator) commented Aug 19, 2024

I think we should batch add and remove from the start in any case. There are obvious overheads to be avoided by doing this.
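
As a sketch of what a batched interface could look like (the addRoots/removeRoots names, the RootData struct, and the proofSetId parameter are placeholders for this sketch, not the actual PDP service API):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical batched interface. A single external call amortizes the
// fixed per-call overhead (calldata decoding, access checks, event
// emission) across every root in the batch.
interface IPDPBatchedSketch {
    struct RootData {
        bytes32 digest;   // CID multihash digest
        uint256 rawSize;  // size of the underlying data in bytes
    }

    function addRoots(uint256 proofSetId, RootData[] calldata roots) external;

    function removeRoots(uint256 proofSetId, uint256[] calldata rootIds) external;
}
```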

@ZenGround0 (Contributor, Author)

It's worth investigating the CID representation for efficiency. We could explore having separate mappings for CID hashes vs CID prefixes.
"// TODO: there is more to think about here.
// Existing libraries (https://github.com/filecoin-project/filecoin-solidity/blob/master/contracts/v0.8/types/CommonTypes.sol#L91)
// don't give us any significant functionality, so it makes sense to redeclare.
// There will be performance benefits for both storage and memory ops from using bytes32, but we should wait
// on measurement before making that decision. We could potentially use a bytes32 hash and bytes8 for the prefix."

@ZenGround0 (Contributor, Author)

We can explore changing the shape of the key space from two hierarchical proofset => root id key spaces to one flat space. The current hierarchical approach is speculatively better for isolation between different proof sets.
"// TODO: a single mapping with a pair key, i.e. (uint256, uint256),
// might be more efficient as it will halve the hashing needed for access and set"
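
For comparison, a sketch of both key-space shapes. It packs the two ids into a single uint256 key, which assumes proofSetId and rootId each fit in 128 bits (an assumption made only for this sketch); the nested form computes two keccak256 hashes per storage access, the packed flat form one:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Sketch of the flat vs hierarchical key space.
contract KeySpaceSketch {
    // hierarchical: proofSetId => rootId => value (two hashes per access)
    mapping(uint256 => mapping(uint256 => bytes32)) public nested;

    // flat: packed (proofSetId, rootId) => value (one hash per access)
    mapping(uint256 => bytes32) public flat;

    function _key(uint128 proofSetId, uint128 rootId) internal pure returns (uint256) {
        return (uint256(proofSetId) << 128) | uint256(rootId);
    }

    function setFlat(uint128 proofSetId, uint128 rootId, bytes32 value) external {
        flat[_key(proofSetId, rootId)] = value;
    }

    function getFlat(uint128 proofSetId, uint128 rootId) external view returns (bytes32) {
        return flat[_key(proofSetId, rootId)];
    }
}
```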
