Re: [PATCH 5/7] tmp-objdir: new API for creating and removing primary object dirs

On Thu, Sep 30, 2021 at 12:33 AM Jeff King <peff@xxxxxxxx> wrote:
>
> On Tue, Sep 28, 2021 at 09:08:00PM -0700, Junio C Hamano wrote:
>
> > Jeff King <peff@xxxxxxxx> writes:
> >
> > >   Side note: The pretend_object_file() approach is actually even better,
> > >   because we know the object is fake. So it does not confuse
> > >   write_object_file()'s "do we already have this object" freshening
> > >   check.
> > >
> > >   I suspect it could even be made faster than the tmp_objdir approach.
> > >   From our perspective, these objects really are tempfiles. So we could
> > >   write them as such, not worrying about things like fsyncing them,
> > >   naming them into place, etc. We could just write them out, then mmap
> > >   the results, and put the pointers into cached_objects (currently it
> > >   insists on malloc-ing a copy of the input buffer, but that seems like
> > >   an easy extension to add).
> > >
> > >   In fact, I think you could get away with just _one_ tempfile per
> > >   merge. Open up one tempfile. Write out all of the objects you want to
> > >   "store" into it in sequence, and record the lseek() offsets before and
> > >   after for each object. Then mmap the whole result, and stuff the
> > >   appropriate pointers (based on arithmetic with the offsets) into the
> > >   cached_objects list.
> >
> > Cute.  The remerge diff code path creates a full tree that records
> > the mechanical merge result.  By hooking into the lowest layer of
> > write_object() interface, we'd serialize all objects in such a tree
> > in the order they are computed (bottom up from the leaf level, I'd
> > presume) into a single flat file ;-)
>
> I do still like this approach, but just two possible gotchas I was
> thinking of:
>
>  - This side-steps all of our usual code for getting object data into
>    memory. In general, I'd expect this content to not be too enormous,
>    but it _could_ be if there are many / large blobs in the result. So
>    we may end up with large maps. Probably not a big deal on modern
>    64-bit systems. Maybe an issue on 32-bit systems, just because of
>    virtual address space.
>
>    Likewise, we do support systems with NO_MMAP. They'd work here, but
>    it would probably mean putting all that object data into the heap. I
>    could live with that, given how rare such systems are these days, and
>    that it only matters if you're using --remerge-diff with big blobs.

Um, I'm starting to get uncomfortable with this pretend_object stuff.
Part of the reason that merge-ort isn't truly "in memory", despite
attempting to be exactly that, is that for large enough repos with
enough files modified on both sides, I wasn't comfortable assuming
that all new files from three-way content merges and all new trees
would fit in memory.  I'm sure we'd be fine with current-day linux
kernel sized repos; no big deal.  In fact, most merges probably don't
add more than a few dozen new files.  But for microsoft-sized repos,
and with repos tending to grow over time (more so when the tools
themselves scale nicely, which we've all been working on enabling), I
worry there might be enough new objects within a single merge
(especially given the recursive inner merges) that we might need to
worry about this.

>  - I wonder to what degree --remerge-diff benefits from omitting writes
>    for objects we already have. I.e., if you are writing out a whole
>    tree representing the conflicted state, then you don't want to write
>    all of the trees that aren't interesting. Hopefully the code is
>    already figuring out which paths the merge even touched, and ignoring
>    the rest.

Not only do you want to avoid writing all of the trees that aren't
interesting, you also want to avoid traversing into them in the first
place and avoid doing trivial file merges for each entry underneath.
Sadly, merge-recursive did all of that anyway, because renames and
directory renames can sometimes throw a big wrench in the desire to
avoid traversing into such directories.  Figuring out how to avoid it
was kind of tricky, but merge-ort definitely handles this when it can
safely do so; see the cover letter for my trivial directory resolution
series[1].

[1] https://lore.kernel.org/git/pull.988.v4.git.1626841444.gitgitgadget@xxxxxxxxx/

>    It probably benefits performance-wise from
>    write_object_file() deciding to skip some object writes, as well
>    (e.g., for resolutions which the final tree already took, as they'd
>    be in the merge commit).

Yes, it will also benefit from that, but the much bigger win is
avoiding needlessly recursing into directories that are unchanged on
(at least) one side.

>    The whole pretend-we-have-this-object thing
>    may want to likewise make sure we don't write out objects that we
>    already have in the real odb.

Right, so I'd have to copy the relevant logic from write_object_file()
-- I think that means that instead of write_object_file()'s calls to
freshen_packed_object() and freshen_loose_object(), I would call
find_pack_entry(), and make has_loose_object() in object-file.c
non-static so I can call it as well.  Does that sound right?
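Something like this sketch in the pretend-object path, assuming the
signatures of those two functions stay as they are today (untested,
and has_loose_object() would still need to be made non-static):

```c
	struct pack_entry e;

	/* Don't fake an object the real odb already has. */
	if (find_pack_entry(the_repository, oid, &e) ||
	    has_loose_object(oid))
		return 0;
```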

Of course, that's assuming we're okay with this pretend_object thing,
which I'm starting to worry about.


