Re: Changed path filter hash differs from murmur3 if char is signed

Taylor Blau <me@xxxxxxxxxxxx> writes:
> On Thu, May 11, 2023 at 03:51:16PM -0700, Junio C Hamano wrote:
> > If path filter hashing were merely advisory, in the sense that if a
> > matching data is found, great, the processing goes faster, but if
> > not, we would get correct results albeit not so quickly, a third
> > option would be to just update the implementation without updating
> > the version number.  But we may not be so lucky---you must have seen
> > a wrong result returned quickly, which is not what we want to see.
> 
> Right; from my understanding of Jonathan's message, I believe that we
> would get the wrong results if the implementation of not-quite-murmur3
> were corrected without updating the on-disk Bloom filters.

Yes - if the Bloom filter contains junk data (in our example, filter
data computed with a different hash function on filenames containing
bytes above 0x7f), the filter can falsely report "no, this commit does
not contain a change in such-and-such path", and we would then skip a
commit that did in fact change that path.

> We already have the bloom_filter_settings struct, which could also
> encode whether or not the data was computed with sign extension or not.
> If we are consistent in computing the hashes when we write Bloom filters
> and when we re-compute hashes to query them, I'd think we would be able
> to reuse the existing filters.
> 
> That would be nice to do if we could. Unfortunately, I don't think there
> is an easy way to determine the signed-ness of an existing set of Bloom
> filters, nor a guarantee that they all have the same signed-ness (IOW,
> the Bloom filters could have been built up across multiple copies of
> Git, each with different compiler settings).
> 
> So I'm not sure that that is a productive direction for us to take.

Agreed.

> > But if I recall correctly we made the file format in such a way that
> > bumping the version number is cheap in that transition can appear
> > seamless.  An updated implementation can just be told to _ignore_
> > old and possibly incorrect Bloom filters until it gets told to
> > recompute, at which time it can write a correct one with a new
> > version number.  So I would prefer your "Bump the version number and
> > ignore the old and possibly wrong data".
> 
> Version numbers are easy to increment, as is the case when adding new
> chunks to the chunked file format used by commit-graph,
> multi-pack-index, etc.
> 
> However, computing Bloom filters en-masse from scratch is fairly
> expensive. At GitHub when we were rolling out Bloom filters to all
> repositories a few years ago, I wrote 809e0327f5
> (builtin/commit-graph.c: introduce '--max-new-filters=<n>', 2020-09-18)
> to avoid spending too much time computing new filters from scratch.

Thanks for this pointer.

> If we do have to re-compute all of the filters from scratch, it may be
> worth considering enumerating all of the repository's paths first and
> seeing if any of them contain a byte above 0x7f. If none do, we can propagate the
> existing filters forward without having to compute new ones.

One way server operators can do this is by:
 - patching their Git to ignore the Bloom filter version number
 - enumerating, in a batch job, all trees and inspecting their filenames
 - bumping the version number for repos whose filenames are all <=0x7f
 - updating Git to the latest version (which uses the new hash)

Enumerating tree objects directly avoids a full revision walk, which
should speed things up.
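The tree-enumeration step could look like the following sketch (the function name and exact plumbing invocations are assumptions, not an endorsed interface). It lists every tree object with `git cat-file --batch-all-objects` rather than walking history; a non-recursive `ls-tree` per tree suffices because every subtree is itself an object in the listing, so every path component gets inspected:

```python
import subprocess

def repo_paths_are_ascii(repo_dir: str) -> bool:
    """Return True if every filename in every tree object of the
    repository is pure ASCII (all bytes <= 0x7f)."""
    def git(*args: str) -> bytes:
        return subprocess.run(("git", "-C", repo_dir) + args,
                              check=True, capture_output=True).stdout

    # Enumerate all objects (loose and packed) without a revision walk.
    listing = git("cat-file", "--batch-all-objects",
                  "--batch-check=%(objecttype) %(objectname)")
    trees = [line.split()[1] for line in listing.decode().splitlines()
             if line.startswith("tree ")]

    for tree in trees:
        # -z NUL-terminates entries, so filenames pass through as raw bytes.
        for name in git("ls-tree", "--name-only", "-z", tree).split(b"\0"):
            if any(b > 0x7f for b in name):
                return False
    return True
```

Note that `--batch-all-objects` also visits unreachable trees, which errs on the conservative side for this purpose.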

I don't have statistics on this, but if the majority of repos have
only <=0x7f filenames (which seems reasonable to me), this might save
sufficient work that we can proceed with bumping the version number and
ignoring old data.

> Better yet, we should be able to reuse existing Bloom filter data for
> paths whose characters are all <=0x7f, and only recompute them where
> necessary. That makes much more sense than the previous paragraph.

I would think that by the time we know the paths changed in a commit,
the cost of computing the Bloom filter (pure CPU work) is negligible
compared to the I/O it took to retrieve those paths, so just ignoring
the old data seems better to me. But perhaps at large scales we do want
to save the computation time anyway.
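The claim that the per-path computation is cheap can be made concrete: Git's changed-path filters use double hashing (the Kirsch-Mitzenmacher scheme), so each path costs two base hashes plus a handful of additions. A hedged sketch, with crc32 and made-up seed values standing in for Git's seeded murmur3 (the function name and all constants here are illustrative assumptions):

```python
import zlib

def bloom_probe_positions(path: bytes, num_hashes: int = 7,
                          nbits: int = 512) -> list:
    """Double hashing: derive all k Bloom probe positions from just two
    base hashes of the path. crc32 with made-up seeds stands in for
    Git's seeded murmur3 so the sketch is self-contained."""
    h1 = zlib.crc32(path, 0x293a)  # illustrative seeds, not Git's
    h2 = zlib.crc32(path, 0x7af1)
    return [(h1 + i * h2) % nbits for i in range(num_hashes)]
```

With only two hash evaluations per path, the CPU side is indeed dwarfed by the tree I/O needed to discover the changed paths in the first place.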
