Tracking: Caching and Parallel Prewarming #13713
Labels
- A-execution: Related to the Execution and EVM
- A-trie: Related to Merkle Patricia Trie implementation
- C-perf: A change motivated by improving speed, memory usage or disk footprint
- C-tracking-issue: An issue that collects information about a broad development initiative
This task consists of two things:
State Caching
This requires adding a cache for state (accounts, storage, bytecode) and populating it from whatever prefetching we do. Prefetching is spawned before execution and runs concurrently with it. Once the cache is populated, any time execution must load an account from the database, it should first check the cache for a hit. It's important that execution only uses the cache for the first access of an account, to prevent race conditions.
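A minimal sketch of what such a cache and wrapper could look like, using only the standard library; `Address`, `Account`, `StateCache`, `AccountProvider`, and `CachedStateProvider` are hypothetical stand-ins rather than reth's actual types:

```rust
use std::collections::{HashMap, HashSet};
use std::sync::{Arc, RwLock};

/// Hypothetical address and account types standing in for reth's real ones.
type Address = [u8; 20];

#[derive(Clone, Debug)]
struct Account {
    nonce: u64,
    balance: u128,
}

/// Shared cache populated by prefetching tasks before / alongside execution.
#[derive(Clone, Default)]
struct StateCache {
    accounts: Arc<RwLock<HashMap<Address, Account>>>,
}

impl StateCache {
    /// Called by prefetching / prewarming tasks as they load state.
    fn insert_account(&self, address: Address, account: Account) {
        self.accounts.write().unwrap().insert(address, account);
    }

    fn get_account(&self, address: &Address) -> Option<Account> {
        self.accounts.read().unwrap().get(address).cloned()
    }
}

/// Hypothetical stand-in for the database-backed state provider.
trait AccountProvider {
    fn basic_account(&self, address: &Address) -> Option<Account>;
}

/// Wraps the database provider and consults the cache only on the *first*
/// access of each account; later accesses go straight to the underlying
/// provider so stale cache entries cannot race with in-block changes.
struct CachedStateProvider<P> {
    cache: StateCache,
    first_access_done: HashSet<Address>,
    inner: P,
}

impl<P: AccountProvider> CachedStateProvider<P> {
    fn basic_account(&mut self, address: &Address) -> Option<Account> {
        // `insert` returns true only if this is the first access in the block.
        if self.first_access_done.insert(*address) {
            if let Some(account) = self.cache.get_account(address) {
                return Some(account);
            }
        }
        self.inner.basic_account(address)
    }
}
```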
Prewarming
Prewarming involves using heuristics to determine state that might be needed during execution and fetching it before it is needed. Parallel prewarming is one such heuristic: it executes every transaction in the block in parallel and uses their naive execution to populate the state caches. Nonce checks must be disabled, and it may be worth disabling other checks as well, for example balance checks.
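Building on the hypothetical `StateCache` above, a sketch of the parallel prewarming structure; `prewarm_execute` is a stub, since the actual naive execution would need an EVM configured with the nonce check disabled:

```rust
use std::thread;

/// Hypothetical transaction type; in practice this would be a recovered
/// transaction plus the block's EVM environment.
struct Transaction;

/// Naively executes `tx` against pre-block state with the nonce check (and
/// possibly the balance check) disabled, inserting every account, storage
/// slot and bytecode it loads into `cache`. Left as a stub here, since the
/// real thing needs an EVM; errors are swallowed because prewarming is
/// purely best-effort.
fn prewarm_execute(_tx: &Transaction, _cache: &StateCache) {}

/// Runs one prewarming task per transaction, all on top of the same
/// pre-block state. Transactions that depend on earlier ones in the block
/// will observe stale values, which is fine: the cache is only a hint, and
/// sequential execution never trusts it past the first access.
fn prewarm_block(txs: &[Transaction], cache: &StateCache) {
    thread::scope(|scope| {
        for tx in txs {
            scope.spawn(move || prewarm_execute(tx, cache));
        }
    });
}
```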
Integration with sparse trie
This can be integrated with the sparse trie by sending `PrefetchProofs` events to the state root task on a cache miss.
Intra-block caching
The first and most obvious benefit is to use these caches for intra-block caching. Once the block is done executing, the caches are invalidated, since keeping them valid would require applying the block's state changes to the cache.
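Continuing the sketch above, intra-block invalidation could be as simple as dropping every entry at the block boundary:

```rust
impl StateCache {
    /// Intra-block variant: once the block has finished executing, the cached
    /// entries no longer match post-block state, so they are simply dropped
    /// rather than updated.
    fn clear(&self) {
        self.accounts.write().unwrap().clear();
    }
}
```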
Inter-block caching
Inter-block caching is slightly more complex than intra-block caching, because it would require applying state changes to the cache, and keeping one or more caches around for future blocks.
There are two ways of implementing this, in increasing complexity:
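Whichever approach is taken, the core operation is folding a committed block's changes into the cache. A rough sketch, continuing the hypothetical `StateCache` above with an assumed `BlockStateChanges` type:

```rust
/// Hypothetical per-block change set; the real equivalent (the bundle state
/// produced by execution) also carries storage and bytecode changes.
struct BlockStateChanges {
    /// Post-block account state, or `None` if the account was destroyed.
    accounts: HashMap<Address, Option<Account>>,
}

impl StateCache {
    /// Folds a committed block's changes into the cache so it stays valid for
    /// the next block instead of being dropped.
    fn apply_block_changes(&self, changes: &BlockStateChanges) {
        let mut accounts = self.accounts.write().unwrap();
        for (address, change) in &changes.accounts {
            match change {
                Some(account) => {
                    accounts.insert(*address, account.clone());
                }
                None => {
                    accounts.remove(address);
                }
            }
        }
    }
}
```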
Proof caching
For intra-block caching, proof caching is not expected to have a performance impact unless the state root task fetches proofs for the same address or storage slot multiple times. However, for inter-block caching, caching proofs could be valuable, since the state root task for the next block may try to fetch a proof for an account that was accessed in a previous block.
Inter-block proof caching comes with a similar amount of complexity as inter-block state caching, since cached proofs would need to be updated to reflect new trie changes. This would actually be slightly more complex than inter-block state caching, because updating proofs (or multiproofs) is less trivial than updating state.
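As a standalone sketch of that trade-off, a hypothetical proof cache can always fall back to clearing itself per block; updating entries in place is the more complex option described above:

```rust
use std::collections::HashMap;

/// Hypothetical proof cache for the state root task: hashed account address
/// mapped to the trie nodes of its account proof.
struct ProofCache {
    proofs: HashMap<[u8; 32], Vec<Vec<u8>>>,
}

impl ProofCache {
    fn get(&self, hashed_address: &[u8; 32]) -> Option<&[Vec<u8>]> {
        self.proofs.get(hashed_address).map(Vec::as_slice)
    }

    fn insert(&mut self, hashed_address: [u8; 32], proof: Vec<Vec<u8>>) {
        self.proofs.insert(hashed_address, proof);
    }

    /// After a block commits, every cached proof is potentially stale: even
    /// accounts the block never touched share upper trie nodes (at minimum
    /// the root) with accounts that did change. Clearing everything is the
    /// safe baseline; rewriting only the affected nodes is the more complex
    /// option described above.
    fn on_block_committed(&mut self) {
        self.proofs.clear();
    }
}
```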
Tasks