On Mon, Oct 08, 2018 at 11:08:03PM -0400, Jeff King wrote:

> I'd have done it as one fixed-size filter per commit. Then you should be
> able to hash the path keys once, and apply the result as a bitwise query
> to each individual commit (I'm assuming that it's constant-time to
> access the filter for each, as an index into an mmap'd array, with the
> offset coming from a commit-graph entry we'd be able to look up anyway).

I used one big Bloom filter for the whole history, because that was the
simplest way to get going, and because I was primarily interested in the
potential benefits rather than the cost of generating and maintaining
it.

Using an 8MB filter for git.git results in a false positive rate between
0.21% and 0.53%.  Splitting that up for ~53k commits we get ~160 bytes
for each.  At first sight that seems rather small, but gathering a bit
of statistics shows that 99% of our commits don't change more than 10
files.

One advantage of the "one Bloom filter for each commit" approach is that
if a commit doesn't have a corresponding Bloom filter, then, well, we
can't query the non-existent filter.  OTOH, with one big Bloom filter we
have to be careful to only ever query it with commits whose changes have
already been added, otherwise we can get false negatives.

> I think it would also be easier to deal with maintenance, since each
> filter is independent (IIRC, you cannot delete from a bloom filter
> without re-adding all of the other keys).

Accumulating entries related to unreachable commits will eventually
increase the false positive rate, but otherwise it won't cause false
negatives, and won't increase the size of the Bloom filter or the time
necessary to query it.  So not deleting those entries right away is not
an issue, and I think it could be postponed until bigger gc runs.

[...]

> But there's also a related question: how do we match pathspec patterns?
> For a changed path like "foo/bar/baz", I imagine a bloom filter would
> mark all of "foo", "foo/bar", and "foo/bar/baz".
Indeed, that's what I did.

> But what about "*.c"?  I
> don't think a bloom filter can answer that.

Surely not, but it could easily return "maybe", and thus simply fall
back to looking at the diff.

However, I've looked through the output of

  grep '^git log[^|]*[\[*?]' ~/.bash_history

and haven't found a single case where I used Git's globbing.  When I
did use globbing, I always used the shell's.  Yeah, that's just one
data point, and others surely use it differently, etc...  but I think
we should consider whether it's common enough to worry about and to
increase complexity because of it.

[...]

> So let's imagine we'd store such a cache external to the regular object
> data (i.e., as a commit-graph entry). The "log --raw" diff of linux.git
> has 1.7M entries. The paths should easily compress to a single 32-bit
> integer (e.g., as an index into a big path list). The oids are 20 bytes.
> Add a few bytes for modes. That's about 80MB. Big, but not impossibly
> so. Maybe pushing it for true gigantic repos, though.

In my experiments with the Linux repo a 256MB Bloom filter has a ~0.3%
false positive rate, while a 128MB filter has 3-4%.  Even bigger,
though arguably not that much compared to the size of a checkout of
the full kernel tree.

> Those numbers are ignoring merges, too. The meaning of "did this commit
> touch that path" is a lot trickier for a merge commit, and I think may
> depend on context. I'm not sure how even a bloom filter solution would
> handle that (I was assuming we'd mostly punt and let merges fall back to
> opening up the trees).

During revision walking rev_compare_tree() checks whether the given
paths changed between a commit and _one_ of its parents, and in case of
merge commits it's invoked multiple times with the same commit but with
different parent parameters.  By storing (changed-path, parent-oid,
commit-oid) tuples in the Bloom filter it can deal with merges, too.
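To illustrate the tuple trick for merges, here is a minimal sketch in
Python (not the actual implementation from the series; the class,
the key encoding, and the sizes are all made up for illustration).
Keying each entry on the (changed-path, parent-oid, commit-oid) tuple
means a merge commit gets queried once per parent, matching how
rev_compare_tree() is invoked:

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray((size_bits + 7) // 8)

    def _positions(self, key):
        # Derive k bit positions from a single SHA-1 of the key, so
        # each key is hashed only once per add or query.
        digest = hashlib.sha1(key).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False means "definitely did not change"; True means "maybe",
        # which the caller resolves by actually diffing the trees.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

def change_key(path, parent_oid, commit_oid):
    # Hypothetical key encoding for the (changed-path, parent-oid,
    # commit-oid) tuple; the real on-disk format would differ.
    return "\0".join([path, parent_oid, commit_oid]).encode()

bf = BloomFilter(size_bits=1 << 20)
# Leading directories are added as well, so that a pathspec like
# "foo/bar" can be answered, not just full paths.
for p in ("foo", "foo/bar", "foo/bar/baz"):
    bf.add(change_key(p, "parent1-oid", "merge-oid"))
```

A query for the same path but a different parent of the merge then
almost certainly returns False, so each parent side can be skipped or
diffed independently.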
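For what it's worth, the per-commit sizing discussed earlier can be
sanity-checked with the standard Bloom filter approximation.  A quick
sketch (the choice of k = 4 hashes is arbitrary, and note that adding
directory-prefix entries would raise the item count somewhat beyond
the changed-file count):

```python
import math

def bloom_fpr(m_bits, n_items, k_hashes):
    # Standard approximation of a Bloom filter's false positive
    # rate: (1 - e^(-k*n/m))^k
    return (1.0 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

# A 160-byte per-commit filter holding the <= 10 changed paths that
# cover 99% of git.git's commits stays well below a 1e-5 false
# positive rate.
per_commit = bloom_fpr(160 * 8, 10, 4)
```

So at least by this formula, 160 bytes per commit is not small at all
for the common case; only the outlier commits touching many paths
would push the rate up.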