Re: [PATCH 5/7] tmp-objdir: new API for creating and removing primary object dirs

On Thu, Sep 30 2021, Jeff King wrote:

> On Tue, Sep 28, 2021 at 09:08:00PM -0700, Junio C Hamano wrote:
>
>> Jeff King <peff@xxxxxxxx> writes:
>> 
>> >   Side note: The pretend_object_file() approach is actually even better,
>> >   because we know the object is fake. So it does not confuse
>> >   write_object_file()'s "do we already have this object" freshening
>> >   check.
>> >
>> >   I suspect it could even be made faster than the tmp_objdir approach.
>> >   From our perspective, these objects really are tempfiles. So we could
>> >   write them as such, not worrying about things like fsyncing them,
>> >   naming them into place, etc. We could just write them out, then mmap
>> >   the results, and put the pointers into cached_objects (currently it
>> >   insists on malloc-ing a copy of the input buffer, but that seems like
>> >   an easy extension to add).
>> >
>> >   In fact, I think you could get away with just _one_ tempfile per
>> >   merge. Open up one tempfile. Write out all of the objects you want to
>> >   "store" into it in sequence, and record the lseek() offsets before and
>> >   after for each object. Then mmap the whole result, and stuff the
>> >   appropriate pointers (based on arithmetic with the offsets) into the
>> >   cached_objects list.
>> 
>> Cute.  The remerge diff code path creates a full tree that records
>> the mechanical merge result.  By hooking into the lowest layer of the
>> write_object() interface, we'd serialize all objects in such a tree
>> in the order they are computed (bottom up from the leaf level, I'd
>> presume) into a single flat file ;-)
>
> I do still like this approach, but just two possible gotchas I was
> thinking of:
>
>  - This side-steps all of our usual code for getting object data into
>    memory. In general, I'd expect this content to not be too enormous,
>    but it _could_ be if there are many / large blobs in the result. So
>    we may end up with large maps. Probably not a big deal on modern
>    64-bit systems. Maybe an issue on 32-bit systems, just because of
>    virtual address space.
>
>    Likewise, we do support systems with NO_MMAP. They'd work here, but
>    it would probably mean putting all that object data into the heap. I
>    could live with that, given how rare such systems are these days, and
>    that it only matters if you're using --remerge-diff with big blobs.
>
>  - I wonder to what degree --remerge-diff benefits from omitting writes
>    for objects we already have. I.e., if you are writing out a whole
>    tree representing the conflicted state, then you don't want to write
>    all of the trees that aren't interesting. Hopefully the code is
>    already figuring out which paths the merge even touched, and ignoring
>    the rest. It probably benefits performance-wise from
>    write_object_file() deciding to skip some object writes, as well
>    (e.g., for resolutions which the final tree already took, as they'd
>    be in the merge commit). The whole pretend-we-have-this-object thing
>    may want to likewise make sure we don't write out objects that we
>    already have in the real odb.
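
Concretely, I imagine the one-tempfile idea quoted above would look
something like the minimal standalone sketch below (untested, plain
POSIX rather than git's actual APIs, and the cached_objects wiring is
left out, since as noted that list currently insists on copying its
input):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

struct fake_object {
	off_t start;     /* where this object's data starts in the tempfile */
	size_t len;      /* how much data it has */
	const char *ptr; /* filled in once the whole file is mmap'd */
};

int main(void)
{
	const char *payloads[] = { "blob one\n", "blob two\n", "a tree\n" };
	struct fake_object objs[3];
	char path[] = "/tmp/fake-objects-XXXXXX";
	int fd = mkstemp(path);
	off_t total = 0;

	if (fd < 0)
		return 1;
	unlink(path); /* really a tempfile: no fsync, no naming into place */

	/* write out all the objects in sequence, recording the offsets */
	for (int i = 0; i < 3; i++) {
		size_t len = strlen(payloads[i]);
		objs[i].start = total;
		objs[i].len = len;
		if (write(fd, payloads[i], len) != (ssize_t)len)
			return 1;
		total += len;
	}

	/* one mmap of the whole result ... */
	const char *base = mmap(NULL, total, PROT_READ, MAP_PRIVATE, fd, 0);
	if (base == MAP_FAILED)
		return 1;

	/* ... and pointer arithmetic with the offsets gives the objects;
	   a real version would stuff (oid, ptr, len) into cached_objects */
	for (int i = 0; i < 3; i++) {
		objs[i].ptr = base + objs[i].start;
		printf("object %d: %.*s", i, (int)objs[i].len, objs[i].ptr);
	}

	munmap((void *)base, total);
	close(fd);
	return 0;
}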

I haven't benchmarked this since my core.checkCollisions RFC patch[1]
resulted in the somewhat related loose object cache patch from you, and
not with something like the midx, but just a note: on some setups
simply writing things out is faster than exhaustively checking whether
we absolutely need to write them out.
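
Just to illustrate the trade-off with a toy sketch (the helpers below
are stand-ins, not git internals): in git the have_object() probe is
roughly a stat() of the loose path plus a pack lookup, and the question
is whether doing it at all beats unconditionally writing:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool have_object(const char *oid_hex)
{
	/* stand-in for git's probe: roughly a stat() of the loose path
	   plus a pack lookup; pretend we already have one object */
	return !strcmp(oid_hex, "feedface");
}

static void write_loose(const char *oid_hex, size_t len)
{
	printf("wrote %s (%zu bytes)\n", oid_hex, len);
}

static void store(const char *oid_hex, size_t len)
{
	if (have_object(oid_hex)) { /* the check being debated */
		printf("skipped %s, already present\n", oid_hex);
		return;
	}
	write_loose(oid_hex, len);
}

int main(void)
{
	store("feedface", 123); /* hit: the probe saved a write */
	store("deadbeef", 456); /* miss: the probe was pure overhead */
	return 0;
}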

I also wonder how much, if anything, writing out the one file vs. lots
of loose objects is worth on systems where we could write those loose
objects to a ramdisk, which is commonly available out of the box on
e.g. Linux distros these days. If you care about performance but not
about your transitory data, using a ramdisk is generally much better
than any other potential I/O optimization.

Finally, and I don't mean to throw a monkey wrench into this whole
discussion, so take this as a random musing: I wonder how much faster
this thing could be on its second run if, instead of avoiding writes to
the store & cleaning up, it just wrote to the store, and then wrote
another object keyed on the git version and any revision parameters
etc., so that the second run would pretty much just have to do a "git
cat-file -p <that-obj>" to present the result to the user :)
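
Very roughly, and with everything below being hypothetical (including
the refs/cache/* namespace), I mean something like: derive a
deterministic key from the git version and the revision parameters,
store the rendered result as a blob reachable from a ref named after
that key, and let the second run just resolve and print it:

#include <stdio.h>

/* djb2; a stand-in for whatever the real key hash would be */
static unsigned long key_hash(const char *s)
{
	unsigned long h = 5381;
	while (*s)
		h = h * 33 + (unsigned char)*s++;
	return h;
}

int main(void)
{
	const char *git_version = "2.33.0"; /* output of `git version` */
	const char *query = "--remerge-diff origin/master..HEAD"; /* the args */
	char key[256];

	snprintf(key, sizeof(key), "%s\x1f%s", git_version, query);

	/* hypothetical namespace for cached query results; a second run
	   would resolve this ref and just cat-file the blob behind it */
	printf("git cat-file -p refs/cache/log/%016lx\n", key_hash(key));
	return 0;
}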

I suppose that would be throwing a lot more work at an eventual "git gc"
than we ever do now, so maybe it's a bit crazy, but I think it might be
an interesting direction in general to (ab)use either the primary or
some secondary store in the .git dir as a semi-permanent cache of
resolved queries from the likes of "git log".

1. https://lore.kernel.org/git/20181028225023.26427-5-avarab@xxxxxxxxx/


