RE: New (better?) hash map technique in limit case.

On December 12, 2021 12:44 PM, Philip Oakley:
> To: Glen Choo <chooglen@xxxxxxxxxx>; Git List <git@xxxxxxxxxxxxxxx>
> Subject: Re: New (better?) hash map technique in limit case.
> 
> Hi Glen,
> On 10/12/2021 22:52, Glen Choo wrote:
> > Philip Oakley <philipoakley@iee.email> writes:
> >
> >> Recently I saw a report [1] on a new theoretical result about how to
> >> manage hash maps that get nearly 'full', beating Knuth's limit
> >> formula. The full paper is at [2].
> >>
> >> As I understand it, the method adds the gravestone entries early
> >> during hash collisions to avoid clumping of such collision insertions,
> >> rather than always having to enter the collision list at the end.
> >> This keeps the available slots relatively randomly spaced.
> >>
> >> It feels like the old random bus arrival problem where the average
> >> wait for the next bus is identical to the average time since the last
> >> bus, which is the same as the average bus interval (thus 1 + 1 = 1),
> >> and the technique maintains that advantageous perception.
> >>
> >> Given Git's use of hashes, it sounds like it could have uses,
> >> assuming the theory pans out. I've not yet gone through the paper
> >> itself [2] but hope springs eternal.
> >>
> >> Philip
> >>
> >> [1]
> >> S. Nadis and Massachusetts Institute of Technology, “Theoretical
> >> breakthrough could boost data storage.”
> >> https://techxplore.com/news/2021-11-theoretical-breakthrough-boost-storage.html
> >> (accessed Nov. 18, 2021).
> >>
> >> [2]
> >> M. A. Bender, B. C. Kuszmaul, and W. Kuszmaul, “Linear Probing
> >> Revisited: Tombstones Mark the Death of Primary Clustering,”
> >> arXiv:2107.01250 [cs, math], Jul. 2021, Accessed: Nov. 18, 2021.
> >> [Online]. Available: http://arxiv.org/abs/2107.01250
> > Very interesting, thanks for sharing. I haven't read the full paper
> > either, but this is an interesting result.
> >
> > It seems that this result is limited to hash maps with an approximately
> > equal number of insertions and deletions.
> >
> > From [1]
> >
> >   They found that for applications where the number of insertions and
> >   deletions stays about the same—and the amount of data added is roughly
> >   equal to that removed—linear-probing hash tables can operate at high
> >   storage capacities without sacrificing speed.
> >
> > and [2]
> >
> >   ...We then turn our attention to sequences of operations that contain
> >   deletions, and show that the tombstones left behind by those deletions
> >   have a primary-anti-clustering effect, that is, they have a tendency
> >   to speed up future insertions
> >
> > Do we have any such use cases?
> I know that we use hash maps, but haven't followed their actual usage in
> various optimisations.
> 
> Obviously we use hash naming of objects but that's generally a red herring, I
> think, unless we are over-abbreviating the hash so that it's no longer unique
> (which could be happening somewhere).
> 
> I suspect that some of the hosting providers may be more interested from a
> filesystem perspective, as I think we just pass the object store problems to
> the FS. Then again, all the mono-repo and partial checkout corporate users
> are likely to be interested, especially if this unblocks some historical
> misunderstanding about the limits and how to handle them.
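
Before getting to tries: as I read the summaries above, the setting in [2] is open-addressed linear probing where deletions leave tombstones, and the win comes from inserts being able to reuse tombstones in their probe run rather than always landing at the far end of the run. A rough sketch of just that mechanism (toy names and sizes of my own, and deliberately not the paper's full scheme, which as I understand it also spreads extra tombstones around to keep the reusable slots evenly spaced) might look like this:

#include <stdint.h>
#include <stddef.h>

#define TABLE_SIZE 1024		/* toy capacity; power of two */

enum slot_state { SLOT_EMPTY, SLOT_TOMBSTONE, SLOT_USED };

struct slot {
	enum slot_state state;
	uint32_t key;
	void *value;
};

static struct slot table[TABLE_SIZE];

/* Insert, reusing the first tombstone seen in the probe run. */
static int ht_insert(uint32_t key, void *value)
{
	size_t i = key & (TABLE_SIZE - 1);
	struct slot *reuse = NULL;

	for (size_t n = 0; n < TABLE_SIZE; n++, i = (i + 1) & (TABLE_SIZE - 1)) {
		struct slot *s = &table[i];

		if (s->state == SLOT_USED) {
			if (s->key == key) {
				s->value = value;	/* overwrite existing */
				return 0;
			}
			continue;
		}
		if (s->state == SLOT_TOMBSTONE) {
			if (!reuse)
				reuse = s;	/* remember it, keep probing */
			continue;
		}
		/* SLOT_EMPTY: end of the probe run */
		if (reuse)
			s = reuse;
		s->state = SLOT_USED;
		s->key = key;
		s->value = value;
		return 0;
	}
	if (reuse) {			/* no empty slot left at all */
		reuse->state = SLOT_USED;
		reuse->key = key;
		reuse->value = value;
		return 0;
	}
	return -1;			/* table completely full */
}

/* Deletion leaves a tombstone so existing probe runs stay connected. */
static void ht_delete(uint32_t key)
{
	size_t i = key & (TABLE_SIZE - 1);

	for (size_t n = 0; n < TABLE_SIZE; n++, i = (i + 1) & (TABLE_SIZE - 1)) {
		if (table[i].state == SLOT_EMPTY)
			return;			/* not present */
		if (table[i].state == SLOT_USED && table[i].key == key) {
			table[i].state = SLOT_TOMBSTONE;
			return;
		}
	}
}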

If we are going to change data structures for object hashes, it might be better to use a compressed trie (trie is not a typo for tree). A trie maps easily onto the current filesystem structure in .git/objects - in fact, .git/objects is already a two-level compressed trie with a top level of 2 characters. Extending that model, both in memory and on the filesystem, from two levels to multiple levels should not be particularly onerous. Lookup is O(1) in the number of entries: at worst the same as a perfect closed hash, and the same as an open hash with single entries in its buckets. Once an open hash degenerates into long chains, the compressed trie is substantially more time efficient; a trie's lookup and insert stay O(1) in all cases and do not degenerate the way hashes or trees do. Uncompressed tries are generally far less space efficient than hashes (making them quite bad for our purposes), but compressed tries recover most of that wasted space, with the trade-off of some time spent on node splits during insert and on mid-key comparisons within a level during lookup.
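
To make that concrete, here is a minimal, hypothetical sketch of the lookup half of such a path-compressed trie over hex object names (the layout and names are purely illustrative, not anything in git.git; insertion, which has to do the node splits mentioned above, is omitted for brevity):

#include <stddef.h>
#include <string.h>

/*
 * Hypothetical sketch: a path-compressed trie over hex object names.
 * Each node holds a compressed run of hex digits ("skip"), a 16-way
 * fan-out on the digit that follows it, and an optional value for a
 * key that ends at this node.
 */
struct trie_node {
	char skip[41];			/* compressed hex fragment, NUL-terminated */
	struct trie_node *child[16];	/* fan-out on the next hex digit */
	void *value;			/* non-NULL when a key terminates here */
};

static int hexval(char c)
{
	if (c >= '0' && c <= '9')
		return c - '0';
	if (c >= 'a' && c <= 'f')
		return c - 'a' + 10;
	return -1;
}

/*
 * Lookup is O(key length), i.e. constant in the number of stored
 * objects: compare each node's compressed fragment, then branch on
 * the next digit.
 */
static void *trie_lookup(struct trie_node *node, const char *key)
{
	while (node) {
		size_t skiplen = strlen(node->skip);

		if (strncmp(key, node->skip, skiplen))
			return NULL;		/* compressed fragment mismatch */
		key += skiplen;
		if (!*key)
			return node->value;	/* key fully consumed */

		int d = hexval(*key);
		if (d < 0)
			return NULL;		/* not a hex key */
		node = node->child[d];
		key++;
	}
	return NULL;
}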

Note: in a filesystem lookup the search is not O(1) unless the FS is also implemented as a trie, so the benefit on disk will not be as fully realized, although if the compressed level fits into a directory's inode block, the search speeds up significantly because of reduced I/Os. An FS existence search is O(n*m), where n is the maximum number of partitions in the trie's key and m is the maximum number of files within a directory, IF the key partitioning compression is not known - if it IS known, then the FS existence search is O(m). This n is different from (and much smaller than) the total number of entries in the hash. As an example, a sample repo has 2607 entries under .git/objects, and the number of key partitions (n) is always 2, representing, as an example, 00/16212c76018d27bd1a39630c32d1027fbfbebd. Keeping the key partitioning unknown allows the searcher to adapt to any repository's trie compression, including existing ones, so there's that advantage.
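
For illustration, a hypothetical sketch of the path construction when the partitioning IS known in advance: a fixed number of hex digits per directory level, with the current loose-object layout being the special case of one level of two digits:

#include <stdio.h>
#include <string.h>

/*
 * Hypothetical sketch: build a multi-level loose-object path from a hex
 * object name, given "levels" directory levels of "width" hex digits
 * each.  The current layout is the special case levels = 1, width = 2.
 */
static void object_path(char *out, size_t outlen,
			const char *hex, int levels, int width)
{
	size_t used = 0;

	out[0] = '\0';
	for (int i = 0; i < levels; i++) {
		/* one directory component of width hex digits */
		snprintf(out + used, outlen - used, "%.*s/", width, hex + i * width);
		used = strlen(out);
	}
	/* the remainder of the object name becomes the file name */
	snprintf(out + used, outlen - used, "%s", hex + levels * width);
}

int main(void)
{
	char path[128];

	/* the existing two-character top level */
	object_path(path, sizeof(path),
		    "0016212c76018d27bd1a39630c32d1027fbfbebd", 1, 2);
	printf("%s\n", path);	/* 00/16212c76018d27bd1a39630c32d1027fbfbebd */

	/* a deeper, multi-level variant of the same key */
	object_path(path, sizeof(path),
		    "0016212c76018d27bd1a39630c32d1027fbfbebd", 2, 2);
	printf("%s\n", path);	/* 00/16/212c76018d27bd1a39630c32d1027fbfbebd */
	return 0;
}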

-Randall



