Re: [PATCH 5/7] tmp-objdir: new API for creating and removing primary object dirs

On Thu, Sep 30, 2021 at 6:31 AM Ævar Arnfjörð Bjarmason
<avarab@xxxxxxxxx> wrote:
>
> On Thu, Sep 30 2021, Jeff King wrote:
>
> > On Tue, Sep 28, 2021 at 09:08:00PM -0700, Junio C Hamano wrote:
> >
> >> Jeff King <peff@xxxxxxxx> writes:
> >>
> >> >   Side note: The pretend_object_file() approach is actually even better,
> >> >   because we know the object is fake. So it does not confuse
> >> >   write_object_file()'s "do we already have this object" freshening
> >> >   check.
> >> >
> >> >   I suspect it could even be made faster than the tmp_objdir approach.
> >> >   From our perspective, these objects really are tempfiles. So we could
> >> >   write them as such, not worrying about things like fsyncing them,
> >> >   naming them into place, etc. We could just write them out, then mmap
> >> >   the results, and put the pointers into cached_objects (currently it
> >> >   insists on malloc-ing a copy of the input buffer, but that seems like
> >> >   an easy extension to add).
> >> >
> >> >   In fact, I think you could get away with just _one_ tempfile per
> >> >   merge. Open up one tempfile. Write out all of the objects you want to
> >> >   "store" into it in sequence, and record the lseek() offsets before and
> >> >   after for each object. Then mmap the whole result, and stuff the
> >> >   appropriate pointers (based on arithmetic with the offsets) into the
> >> >   cached_objects list.
> >>
> >> Cute.  The remerge diff code path creates a full tree that records
> >> the mechanical merge result.  By hooking into the lowest layer of
> >> write_object() interface, we'd serialize all objects in such a tree
> >> in the order they are computed (bottom up from the leaf level, I'd
> >> presume) into a single flat file ;-)
> >
> > I do still like this approach, but just two possible gotchas I was
> > thinking of:
> >
> >  - This side-steps all of our usual code for getting object data into
> >    memory. In general, I'd expect this content to not be too enormous,
> >    but it _could_ be if there are many / large blobs in the result. So
> >    we may end up with large maps. Probably not a big deal on modern
> >    64-bit systems. Maybe an issue on 32-bit systems, just because of
> >    virtual address space.
> >
> >    Likewise, we do support systems with NO_MMAP. They'd work here, but
> >    it would probably mean putting all that object data into the heap. I
> >    could live with that, given how rare such systems are these days, and
> >    that it only matters if you're using --remerge-diff with big blobs.
> >
> >  - I wonder to what degree --remerge-diff benefits from omitting writes
> >    for objects we already have. I.e., if you are writing out a whole
> >    tree representing the conflicted state, then you don't want to write
> >    all of the trees that aren't interesting. Hopefully the code is
> >    already figuring out which paths the merge even touched, and ignoring
> >    the rest. It probably benefits performance-wise from
> >    write_object_file() deciding to skip some object writes, as well
> >    (e.g., for resolutions which the final tree already took, as they'd
> >    be in the merge commit). The whole pretend-we-have-this-object thing
> >    may want to likewise make sure we don't write out objects that we
> >    already have in the real odb.
>
> I haven't benchmarked since my core.checkCollisions RFC patch[1]
> resulted in the somewhat related loose object cache patch from you, and
> not with something like the midx, but just a note that on some setups
> just writing things out is faster than exhaustively checking if we
> absolutely need to write things out.
>
> I also wonder how much if anything writing out the one file v.s. lots of
> loose objects is worthwhile on systems where we could write out those
> loose objects on a ramdisk, which is commonly available on e.g. Linux
> distros these days out of the box. If you care about performance but not
> about your transitory data using a ramdisk is generally much better than
> any other potential I/O optimization.
>
> Finally, and I don't mean to throw a monkey wrench into this whole
> discussion, so take this as a random musing: I wonder how much faster
> this thing could be on its second run if instead of avoiding writing to
> the store & cleaning up, it just wrote to the store, and then wrote

It'd be _much_ slower.  My first implementation in fact did that; it
just wrote objects to the store, left them there, and didn't bother to
do any auto-gcs.  It slowed down quite a bit as it ran.  Adding
auto-gcs during the run was really slow too. But stepping back,
gc'ing objects that I already knew were garbage seemed like a waste;
why not just prune them pre-emptively?  To do so, though, I'd have to
track all the individual objects I added to make sure I didn't prune
something else.  Following that idea through a few different attempts
eventually led me to the discovery of tmp_objdir.

In case it's not clear to you why just writing all the objects to the
normal store and leaving them there slows things down so much...

Let's say 1 in 10 merges had originally needed some kind of conflict
resolution (either in the outer merge or in the inner merge for the
virtual merge bases), meaning that 9 out of 10 merges traversed by
--remerge-diff don't write any objects.  Now for each merge for which
--remerge-diff does need to create conflicted blobs and new trees,
let's say it writes on average 3 blobs and 7 trees.  (I don't know the
real average numbers, it could well be ~5 total, but ~10 seems a
realistic first order approximation and it makes the math easy.)
If we keep all the objects we write, then `git log --remerge-diff` on
a history with 100,000 merge commits will have added roughly 100,000
loose objects by the time it finishes (100,000 merges / 10 with
conflicts, times ~10 objects each).  That means all diff and merge
operations slow down considerably as it runs, due to all the extra
loose objects.

> another object keyed on the git version and any revision parameters
> etc., and then pretty much just had to do a "git cat-file -p <that-obj>"
> to present the result to the user :)

So caching the full `git log ...` output, based on a hash of the
command line flags, and then merely re-showing it later?  And having
that output be invalidated as soon as any head advances?  Or are you
thinking of caching the output per-commit based on a hash of the other
command line flags...potentially slowing non-log operations down with
the huge number of loose objects?

> I suppose that would be throwing a lot more work at an eventual "git gc"
> than we ever do now, so maybe it's a bit crazy, but I think it might be
> an interesting direction in general to (ab)use either the primary or
> some secondary store in the .git dir as a semi-permanent cache of
> resolved queries from the likes of "git log".

If you do per-commit caching, and the user scrolls through enough
output (not hard to do, just searching the output for some string
often is enough), that "eventual" git-gc will be the very next git
operation.  If you cache the entire output, it'll be invalidated
pretty quickly.  So I don't see how this works.  Or am I
misunderstanding something you're suggesting here?
