Overview
Currently, the `Children` struct contained in an `InternalNode` pre-allocates a `[Option<Child>; 16]` on creation. This allows extremely efficient insertion and deletion of children, but it also incurs significant memory overhead. If desired, we can trade a modest increase in CPU load for a significant decrease in memory footprint by using a `Vec<Child>` as the backing store at node creation, and switching to a `[Option<Child>; 16]` when the node grows too full.
```rust
struct Children {
    inner: ChildrenInner,
}

enum ChildrenInner {
    /// Get and insert are constant time.
    Preallocated(Box<[Option<Child>; 16]>),
    /// Maintains a sorted list of the node's existing children.
    /// Get and insert are linear time, but N is very small.
    Dynamic(Vec<(Nibble, Child)>),
}
```
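One way the `Dynamic` → `Preallocated` switch could look is sketched below. This is an illustration only: the promotion threshold of 8, the placeholder `Child` and `Nibble` types, and the `insert` method body are all assumptions, not part of the proposal.

```rust
/// Placeholder stand-ins for the real `Child` and `Nibble` types.
struct Child(u64);
type Nibble = u8;

/// Assumed cutoff: past this many children, the sorted Vec is promoted
/// to the fixed 16-slot array. The real threshold would be tuned.
const PROMOTE_AT: usize = 8;

enum ChildrenInner {
    Preallocated(Box<[Option<Child>; 16]>),
    Dynamic(Vec<(Nibble, Child)>),
}

impl ChildrenInner {
    fn insert(&mut self, nibble: Nibble, child: Child) {
        match self {
            ChildrenInner::Preallocated(slots) => slots[nibble as usize] = Some(child),
            ChildrenInner::Dynamic(vec) => {
                // keep the pairs sorted by nibble so lookups can binary-search
                match vec.binary_search_by_key(&nibble, |(n, _)| *n) {
                    Ok(i) => vec[i].1 = child,
                    Err(i) => vec.insert(i, (nibble, child)),
                }
                if vec.len() > PROMOTE_AT {
                    // promote: spill the sorted pairs into the fixed array
                    let pairs = std::mem::take(vec);
                    let mut slots: Box<[Option<Child>; 16]> =
                        Box::new(std::array::from_fn(|_| None));
                    for (n, c) in pairs {
                        slots[n as usize] = Some(c);
                    }
                    *self = ChildrenInner::Preallocated(slots);
                }
            }
        }
    }
}

fn main() {
    let mut children = ChildrenInner::Dynamic(Vec::new());
    for n in 0..=8u8 {
        children.insert(n, Child(n as u64));
    }
    // nine children exceed the threshold, so the node has been promoted
    match &children {
        ChildrenInner::Preallocated(slots) => {
            assert_eq!(slots.iter().filter(|s| s.is_some()).count(), 9);
        }
        ChildrenInner::Dynamic(_) => panic!("expected promotion to the array"),
    }
    println!("promoted");
}
```

Keeping the vector sorted makes the eventual spill into the array a single linear pass, and lets `get` use binary search instead of a scan if N ever matters.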
If desired, we can reduce the space overhead of the `Dynamic` variant even further by maintaining a bitmap that records which children are present, instead of directly storing the `Nibble`. So, the vector `[(0, child1), (4, child2), (8, child3)]` would be compressed to `{ 0b0000_0001_0001_0001, [child1, child2, child3] }`.
```rust
impl Children {
    pub fn child(&self, nibble: u8) -> Option<&Child> {
        // nibble must be a nibble (less than 16)
        if nibble >= 16 {
            return None;
        }
        match self {
            // ...
            Self::Dynamic { which_children, children } => {
                // the child exists if which_children[nibble] is set (thinking
                // about it as an array of bits, with nibble 0 at bit 0)
                let exists = 1 == ((which_children >> nibble) & 1);
                if !exists {
                    return None;
                }
                // the index into the child array is the number of ones *leading
                // up to* the nibble we're trying to access, i.e. the count of
                // set bits strictly below it
                let index = (which_children & ((1u16 << nibble) - 1)).count_ones();
                Some(&children[index as usize])
            }
        }
    }
}
```
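As a sanity check, here is a self-contained version of that bitmap-rank lookup, using a bare `u16` for the bitmap and `&str` stand-ins for the real `Child` type; the free function `bitmap_child` is a hypothetical substitute for the method above.

```rust
// `which_children` has bit n set iff the node has a child at nibble n, and
// `children` stores those children in nibble order.
fn bitmap_child<'a>(which_children: u16, children: &'a [&'a str], nibble: u8) -> Option<&'a str> {
    if nibble >= 16 {
        return None;
    }
    // bit test: is there a child at this nibble?
    if (which_children >> nibble) & 1 == 0 {
        return None;
    }
    // rank: count the set bits strictly below `nibble` to find the index
    let index = (which_children & ((1u16 << nibble) - 1)).count_ones() as usize;
    Some(children[index])
}

fn main() {
    // the example from above: children at nibbles 0, 4, and 8
    let which = 0b0000_0001_0001_0001u16;
    let children = ["child1", "child2", "child3"];
    assert_eq!(bitmap_child(which, &children, 0), Some("child1"));
    assert_eq!(bitmap_child(which, &children, 4), Some("child2"));
    assert_eq!(bitmap_child(which, &children, 8), Some("child3"));
    assert_eq!(bitmap_child(which, &children, 5), None);
    println!("ok");
}
```

Note that masking with `(1 << nibble) - 1` before `count_ones` also avoids the shift-by-16 panic that a `>> (16 - nibble)` formulation would hit at `nibble == 0`.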
Links
See here for the previous discussion on Github: #75 (comment)
I think this is relatively low-priority, because as far as I can tell, there are not likely to be many Nodes in memory at any single point in time. Indeed, with the static size, one optimization that could be performed is reusing that allocation if there is ever only a single Node being processed: rather than dropping the old Node and allocating a new one, we could deserialize the next read into the allocation of the existing Node. But that too seems like premature optimization, unless we can see that there is a performance hit measurably traceable to the repeated allocation.
See here for the full discussion on Discord: https://discord.com/channels/824484045370818580/929998533711519755/1074613258595598336.
Credit
Ideas and example code originated with @plaidfinch, but have been adapted for this format. Any mistakes are my own.