Re: [PATCH 00/15] Refactor chunk-format into an API

On 12/9/2020 7:50 PM, Taylor Blau wrote:
> On Wed, Dec 09, 2020 at 06:13:18PM +0100, René Scharfe wrote:
>> I'm not too familiar with the chunk producers and consumers, so I can
>> only offer some high-level observations.  And I don't have to use the
>> API, so go wild! ;)  I was just triggered by the appearance of two
>> working pieces of code being replaced by two slightly different pieces
>> of code plus a third one on top.
> 
> :-).
> 
>>> With regards to the "YAGNI" comment... I do have thoughts about
>>> extending the reachability bitmap format to use chunks (of course, this
>>> would break compatibility with JGit, and it isn't something that I plan
>>> to do in the short-term, or even necessarily in the future).
>>>
>>> In any event, I'm sure that these two won't be the last chunk-based
>>> formats that we have in Git.
>>
>> OK, so perhaps we can do better before this scheme is copied.  The write
>> side is complicated by the fact that the table of contents (TOC) is
>> written first, followed by the actual chunks.  That requires two passes
>> over the data.
> 
> "Two passes" meaning that we have to both compute the size of and then
> write the data? This is relatively cheap to do, at least so I think.
> 
> For example, the OIDLOOKUP commit-graph chunk is just
> the_hash_algo->hashsz * commits->nr bytes wide, so its size can be
> computed in constant time. A more heavyweight case might be the Bloom
> data section, where Bloom filters have to first be computed, their
> lengths accounted for, and _then_ written when we eventually get to
> writing that chunk.
> 
> This happens in compute_bloom_filters(), and then in
> write_chunk_bloom_indexes() + write_chunk_bloom_data(), respectively.
> Those Bloom filters are all stored in a commit slab until they are
> written, so these "two passes" are just paid for in memory.

The current design of the format (TOC first) does require that we can
predict the chunk sizes before we start writing the file. But this also
has _some_ desirable properties for the writer. Specifically, we hold
the file write handle open for a shorter amount of time. How valuable
is that? :shrug:
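
To make that concrete, the on-disk layout forces the writer into two
passes: one over the chunk sizes to emit the TOC, and one over the
payloads. A rough sketch of the shape, using Git's hashfile API from
csum-file.h, with hypothetical names (chunk_info, write_fn, and a
hashwrite_be64() helper) standing in for the real commit-graph
machinery:

    struct chunk_info {
            uint32_t id;      /* four-byte chunk ID, e.g. "OIDL" */
            uint64_t size;    /* must be known before writing */
            int (*write_fn)(struct hashfile *f, void *data);
    };

    static void write_chunks(struct hashfile *f, struct chunk_info *chunks,
                             int nr, uint64_t header_size, void *data)
    {
            /* Each TOC entry is 12 bytes: a chunk ID and an offset. */
            uint64_t offset = header_size + (nr + 1) * 12;
            int i;

            /* Pass 1: the TOC needs every chunk size up front. */
            for (i = 0; i < nr; i++) {
                    hashwrite_be32(f, chunks[i].id);
                    hashwrite_be64(f, offset);
                    offset += chunks[i].size;
            }
            hashwrite_be32(f, 0);      /* terminating label */
            hashwrite_be64(f, offset); /* end of the last chunk */

            /* Pass 2: the payloads must match the promised sizes. */
            for (i = 0; i < nr; i++)
                    chunks[i].write_fn(f, data);
    }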

>> The ZIP format solved a similar issue by placing the TOC at the end,
>> which allows for one-pass streaming.  Another way to achieve that would
>> be to put the TOC in a separate file, like we do for .pack and .idx
>> files.  This way you could have a single write function for chunks, and
>> writers would just be a single sequence of calls for the different
>> types.
> 
> Interesting. I'm not opposed to changing any of these formats (and maybe
> there is some compelling argument for doing so, I am not sure) but I
> think that unifying the implementation for reading / writing the chunk
> format _before_ changing it is a positive step.

Changing the TOC location would require a version increment in the file
formats, which is a bit painful.

I understand why the ZIP format does this: it is streaming data through
a compression algorithm and cannot store the result in memory before
writing. Perhaps we would want to consider that as a way to reduce the
memory load for things like changed-path Bloom filters, but let's wait
for that to become a noticeable problem in practice before making the
change.
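
For what it's worth, a trailer TOC would let the writer run in a single
pass: stream each chunk, remember where it started, and write the TOC
at the end. A sketch of that alternative, reusing the hypothetical
chunk_info above (a real format would also need a fixed-width footer so
a reader could find the start of the TOC):

    static void write_chunks_toc_last(struct hashfile *f,
                                      struct chunk_info *chunks, int nr,
                                      void *data)
    {
            uint64_t *offsets = xcalloc(nr, sizeof(*offsets));
            uint64_t toc_start;
            int i;

            /* One pass: no chunk size is needed in advance. */
            for (i = 0; i < nr; i++) {
                    /* hashfile tracks flushed + buffered bytes. */
                    offsets[i] = f->total + f->offset;
                    chunks[i].write_fn(f, data);
            }

            /* The TOC goes last, built from the recorded offsets. */
            toc_start = f->total + f->offset;
            for (i = 0; i < nr; i++) {
                    hashwrite_be32(f, chunks[i].id);
                    hashwrite_be64(f, offsets[i]);
            }
            hashwrite_be32(f, 0);
            hashwrite_be64(f, toc_start); /* ZIP-style: seek back from
                                             the end to find the TOC */
            free(offsets);
    }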

The TOC location is definitely optimized for readers, who are already
reading the initial header to discover some info about the file.
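
For example, the commit-graph reader can resolve everything it needs in
a single scan of those 12-byte TOC entries. Simplified (the chunk IDs
are the real ones; the rest is illustrative, not the exact code in
commit-graph.c):

    static void read_toc(struct commit_graph *g, const unsigned char *data,
                         uint64_t toc_offset)
    {
            const unsigned char *p = data + toc_offset;
            uint32_t id;

            while ((id = get_be32(p)) != 0) {
                    uint64_t off = get_be64(p + 4);

                    switch (id) {
                    case 0x4f49444c: /* "OIDL" */
                            g->chunk_oid_lookup = data + off;
                            break;
                    case 0x43444154: /* "CDAT" */
                            g->chunk_commit_data = data + off;
                            break;
                    default:
                            break; /* unknown chunks are skipped */
                    }
                    p += 12;
            }
    }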

>> But seeing that the read side just loads all of the chunks anyway
>> (skipping unknown IDs) I wonder why we need a TOC at all.  That would
>> only be useful if callers were trying to read just some small subset
>> of the whole file.  A collection of chunks for easy dumping and loading
>> could be serialized by writing just a small header for each chunk
>> containing its type and size followed by its payload.
> 
> AFAIK, we do use the table of contents to locate where the chunks are
> so that we can, for example, set up the commit_graph structure's
> pointers to point at each chunk appropriately.

The chunk-based format is really optimized for _structured data_ where
the chunk sizes are mostly predictable. The chunk sizes that are not
predictable belong to things like the "extra edges" or changed-path
Bloom filter data, but that data is indexed by a structured chunk.
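
The changed-path Bloom filters are a good example: the filter lengths
in BDAT vary per commit, but the BIDX chunk stores one cumulative
32-bit end-offset per commit, so slicing out a filter stays O(1).
Roughly (a simplified sketch, not the exact code in bloom.c):

    static void bloom_filter_slice(struct commit_graph *g, uint32_t pos,
                                   const unsigned char **start, size_t *len)
    {
            /* BIDX entry i is the end of filter i; entry i-1 is its
             * start. */
            uint32_t end = get_be32(g->chunk_bloom_indexes + 4 * pos);
            uint32_t begin = pos ? get_be32(g->chunk_bloom_indexes +
                                            4 * (pos - 1)) : 0;

            /* BDAT begins with a 12-byte header: hash version, number
             * of hash functions, and bits per entry. */
            *start = g->chunk_bloom_data + 3 * sizeof(uint32_t) + begin;
            *len = end - begin;
    }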

A natural fit for the chunk-based format is the reachability bitmap
format, since the current format requires a scan to discover which
commits have bitmaps. If we used chunks, then we could quickly
search a lookup table for the commits that have bitmaps, then navigate
to the binary data chunk for the bitmap itself.
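
To sketch the idea (nothing like this exists today; the names and
layout are made up): a lookup chunk could hold entries sorted by commit
position, and finding a bitmap would become a binary search instead of
a scan.

    struct bitmap_lookup_entry {
            uint32_t commit_pos;    /* position in the pack or graph */
            uint64_t bitmap_offset; /* into a separate bitmap data chunk */
    };

    static int find_bitmap_offset(const struct bitmap_lookup_entry *table,
                                  uint32_t nr, uint32_t pos,
                                  uint64_t *offset)
    {
            uint32_t lo = 0, hi = nr;

            while (lo < hi) {
                    uint32_t mi = lo + (hi - lo) / 2;

                    if (table[mi].commit_pos == pos) {
                            *offset = table[mi].bitmap_offset;
                            return 0;
                    }
                    if (table[mi].commit_pos < pos)
                            lo = mi + 1;
                    else
                            hi = mi;
            }
            return -1; /* this commit has no bitmap */
    }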

Thanks,
-Stolee


