Re: [PATCH v4 09/15] commit-graph: write Bloom filters to commit graph file

On Mon, Apr 06, 2020 at 04:59:49PM +0000, Garima Singh via GitGitGadget wrote:
> From: Garima Singh <garima.singh@xxxxxxxxxxxxx>
> 
> Update the technical documentation for commit-graph-format with
> the formats for the Bloom filter index (BIDX) and Bloom filter
> data (BDAT) chunks. Write the computed Bloom filters information
> to the commit graph file using this format.
> 
> Helped-by: Derrick Stolee <dstolee@xxxxxxxxxxxxx>
> Signed-off-by: Garima Singh <garima.singh@xxxxxxxxxxxxx>
> ---
>  .../technical/commit-graph-format.txt         |  30 +++++
>  commit-graph.c                                | 113 +++++++++++++++++-
>  commit-graph.h                                |   5 +
>  3 files changed, 147 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/technical/commit-graph-format.txt b/Documentation/technical/commit-graph-format.txt
> index a4f17441aed..de56f9f1efd 100644
> --- a/Documentation/technical/commit-graph-format.txt
> +++ b/Documentation/technical/commit-graph-format.txt
> @@ -17,6 +17,9 @@ metadata, including:
>  - The parents of the commit, stored using positional references within
>    the graph file.
>  
> +- The Bloom filter of the commit carrying the paths that were changed between
> +  the commit and its first parent, if requested.
> +
>  These positional references are stored as unsigned 32-bit integers
>  corresponding to the array position within the list of commit OIDs. Due
>  to some special constants we use to track parents, we can store at most
> @@ -93,6 +96,33 @@ CHUNK DATA:
>        positions for the parents until reaching a value with the most-significant
>        bit on. The other bits correspond to the position of the last parent.
>  
> +  Bloom Filter Index (ID: {'B', 'I', 'D', 'X'}) (N * 4 bytes) [Optional]
> +    * The ith entry, BIDX[i], stores the number of 8-byte word blocks in all

This is inconsistent with the implementation: according to the code in
one of the previous patches these entries are plain byte offsets, not
counts of 8-byte words, i.e. the combined size of all modified path
Bloom filters can be at most 2^32 bytes.

The commit-graph file can contain information about at most 2^31-1
commits.  This means that with that many commits each commit can have
merely a 2-byte Bloom filter on average.  When using 7 hashes we'd
need about 10 bits per path, so in two bytes we could store only a
single path.

Clearly, using 4 byte index entries significantly lowers the max
number of commits that can be stored with modified path Bloom filters.
IMO every new chunk must support at least 2^31-1 commits.
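
Just to spell out the arithmetic behind those numbers:

  2^32 bytes of filter data / (2^31 - 1) commits  ≈  2 bytes per commit
  k = 7 hashes  =>  7 / ln(2)  ≈  10 bits per path
  2 bytes = 16 bits  =>  room for barely a single path per commit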

> +      Bloom filters from commit 0 to commit i (inclusive) in lexicographic
> +      order. The Bloom filter for the i-th commit spans from BIDX[i-1] to
> +      BIDX[i] (plus header length), where BIDX[-1] is 0.
> +    * The BIDX chunk is ignored if the BDAT chunk is not present.
> +
> +  Bloom Filter Data (ID: {'B', 'D', 'A', 'T'}) [Optional]
> +    * It starts with header consisting of three unsigned 32-bit integers:
> +      - Version of the hash algorithm being used. We currently only support
> +	value 1 which corresponds to the 32-bit version of the murmur3 hash
> +	implemented exactly as described in
> +	https://en.wikipedia.org/wiki/MurmurHash#Algorithm and the double
> +	hashing technique using seed values 0x293ae76f and 0x7e646e2 as
> +	described in https://doi.org/10.1007/978-3-540-30494-4_26 "Bloom Filters
> +	in Probabilistic Verification"

How should double hashing compute the k hashes, i.e. using 64 bit or
32 bit unsigned integer arithmetic?
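
To illustrate: I can see at least two plausible readings, and they can
pick different bits once h1 + i*h2 overflows 32 bits, unless the number
of bits 'm' in the filter happens to divide 2^32 -- and since the filter
size is a multiple of 64 bits but not necessarily a power of two, that
matters in practice.  Illustration only, the function names are made up;
h1 and h2 are the two seeded murmur3 values:

  #include <stdint.h>

  /* 32-bit reading: h1 + i * h2 wraps around modulo 2^32 */
  static uint32_t nth_bit_32(uint32_t h1, uint32_t h2, uint32_t i, uint32_t m)
  {
          return (h1 + i * h2) % m;
  }

  /* 64-bit reading: the sum is computed exactly, only '% m' reduces it */
  static uint32_t nth_bit_64(uint32_t h1, uint32_t h2, uint32_t i, uint32_t m)
  {
          return (uint32_t)(((uint64_t)h1 + (uint64_t)i * h2) % m);
  }

The file format documentation should be explicit about which one is meant.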

I'm puzzled that you link to this paper and still use double hashing.

Two of the contributions of that paper are that it points out some
shortcomings of the double hashing scheme and provides a better
alternative in the form of enhanced double hashing, which can cut the
false positive rate in half.

However, that paper considers the hashing scheme only in the context
of one big Bloom filter.  I've found that with many small Bloom
filters the k hashes produced by any double hashing variant are not
independent enough, and "standard" double hashing fares the worst
among them.  There are real repositories out there where double
hashing has an average false positive rate over an order of magnitude
higher than enhanced double hashing's.  Though that's not to say that
enhanced double hashing is good...

For details on these issues see

  https://public-inbox.org/git/20200529085038.26008-16-szeder.dev@xxxxxxxxx
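
For reference, by enhanced double hashing I mean the scheme from that
same paper, where the i-th probe is g_i = h1 + i*h2 + (i^3 - i)/6 (mod m);
it's just as cheap to compute as plain double hashing.  An untested
sketch, only to show the idea ('m' is the number of bits in the filter):

  #include <stdint.h>

  static void enhanced_double_hashing(uint32_t h1, uint32_t h2, uint32_t m,
                                      unsigned k, uint32_t *probes)
  {
          uint64_t a = h1 % m;
          uint64_t b = h2 % m;
          unsigned i;

          for (i = 0; i < k; i++) {
                  probes[i] = (uint32_t)a;
                  a = (a + b) % m;      /* next probe position */
                  b = (b + i + 1) % m;  /* growing increment produces the cubic term */
          }
  }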

> +      - The number of times a path is hashed and hence the number of bit positions
> +	      that cumulatively determine whether a file is present in the commit.
> +      - The minimum number of bits 'b' per entry in the Bloom filter. If the filter
> +	      contains 'n' entries, then the filter size is the minimum number of 64-bit
> +	      words that contain n*b bits.

Since the ideal number of bits per element depends only on the number
of hashes per path (k / ln(2) ≈ k * 10 / 7), why is this value stored
in the commit-graph?
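
(For reference: with n paths in an m-bit filter the false positive rate
is roughly (1 - e^(-k*n/m))^k, which for a fixed m/n is minimized at
k = (m/n) * ln(2), hence m/n = k / ln(2).  Once k is fixed, the number
of bits per entry follows from it.)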

> +    * The rest of the chunk is the concatenation of all the computed Bloom
> +      filters for the commits in lexicographic order.
> +    * Note: Commits with no changes or more than 512 changes have Bloom filters
> +      of length zero.

What does this "Note:" prefix mean in the file format specification?

Can an implementation use a one byte Bloom filter with no bits set for
a commit with no changes?  Can an implementation still store a Bloom
filter for commits that modify more than 512 paths?
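
To make the ambiguity concrete, with the format as proposed a reader has
to do something like this (illustration only, made-up variable names):

  /* byte range of the pos-th filter, relative to the end of the BDAT header */
  uint32_t start = pos ? bloom_index[pos - 1] : 0;
  uint32_t end = bloom_index[pos];

  if (start == end) {
          /*
           * Zero-length filter: the commit changed either no paths at
           * all or more than 512 paths -- we can't tell which, so we
           * must fall back to computing the diff.
           */
  }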

> +    * The BDAT chunk is present if and only if BIDX is present.
> +
>    Base Graphs List (ID: {'B', 'A', 'S', 'E'}) [Optional]
>        This list of H-byte hashes describe a set of B commit-graph files that
>        form a commit-graph chain. The graph position for the ith commit in this
> diff --git a/commit-graph.c b/commit-graph.c
> index 732c81fa1b2..a8b6b5cca5d 100644
> --- a/commit-graph.c
> +++ b/commit-graph.c

> @@ -1034,6 +1071,59 @@ static void write_graph_chunk_extra_edges(struct hashfile *f,
>  	}
>  }
>  
> +static void write_graph_chunk_bloom_indexes(struct hashfile *f,
> +					    struct write_commit_graph_context *ctx)
> +{
> +	struct commit **list = ctx->commits.list;
> +	struct commit **last = ctx->commits.list + ctx->commits.nr;
> +	uint32_t cur_pos = 0;
> +	struct progress *progress = NULL;
> +	int i = 0;
> +
> +	if (ctx->report_progress)
> +		progress = start_delayed_progress(
> +			_("Writing changed paths Bloom filters index"),
> +			ctx->commits.nr);
> +
> +	while (list < last) {
> +		struct bloom_filter *filter = get_bloom_filter(ctx->r, *list);
> +		cur_pos += filter->len;

Given a sufficiently large number of commits with large enough Bloom
filters this will silently overflow.
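
At the very least I'd expect a check along these lines (just a sketch;
whether to die() or to rather not write the Bloom chunks at all is a
separate question):

	if (unsigned_add_overflows(cur_pos, filter->len))
		die(_("too much Bloom filter data for 32-bit BIDX offsets"));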

> +		display_progress(progress, ++i);
> +		hashwrite_be32(f, cur_pos);
> +		list++;
> +	}
> +
> +	stop_progress(&progress);
> +}


