Re: Git in Outreachy December 2019?

On Mon, Sep 23, 2019 at 01:38:54PM -0700, Jonathan Tan wrote:

> I didn't have any concrete ideas so I didn't include those, but some
> unrefined ideas:

One risk to a mentoring project like this is that the intern does a good
job of steps 1-5, and then in step 6 we realize that the whole thing is
not useful, and upstream doesn't want it. Which isn't to say the intern
didn't learn something, or that the project didn't benefit. Negative
results can be useful; but they can also be demoralizing.

I'm not arguing that's going to be the case here. But I do think it's
worth talking through these things a bit as part of thinking about
proposals.

>  - index-pack has the CLI option to specify a message to be written into
>    the .promisor file, but in my patch to write fetched refs to
>    .promisor [1], I ended up making fetch-pack.c write the information
>    because I didn't know how many refs were going to be written (and I
>    didn't want to bump into CLI argument length limits). If we had this
>    feature, I might have been able to pass a callback to index-pack that
>    writes the list of refs once we have the fd into .promisor,
>    eliminating some code duplication (but I haven't verified this).

That makes some sense. We could pass the data over a pipe, but obviously
stdin is already in use to receive the pack here. Ideally we'd be able
to pass multiple streams between the programs, but I think due to
Windows support, we can't assume that arbitrary pipe descriptors will
make it across the run-command boundary. So I think we'd be left with
communicating via temporary files (which really isn't the worst thing in
the world, but has its own complications).

>  - In your reply [2] to the above [1], you mentioned the possibility of
>    keeping a list of cutoff points. One way of doing this, as I state in
>    [3], is my original suggestion back in 2017 of one such
>    repository-wide list. If we do this, it would be better for
>    fetch-pack to handle this instead of index-pack, and it seems more
>    efficient to me to have index-pack be able to pass objects to
>    fetch-pack as they are inflated instead of fetch-pack rereading the
>    compressed forms on disk (but again, I haven't verified this).

And this is the flip-side problem: we need to get data back, but we have
only stdout, which is already in use (so we need some kind of protocol).
That leads to things like the horrible NUL-byte added by 83558686ce
(receive-pack: send keepalives during quiet periods, 2016-07-15).

> There are also the debuggability improvements of not having to deal with
> 2 processes.

I think it can sometimes be easier to debug with two separate processes,
because the input to index-pack is well-defined and can be repeated
without hitting the network (though you do have to figure out how to
record the network response, which can be non-trivial). I've also done
similar things for running performance simulations.

We'll still have the stand-alone index-pack command, so it can be used
for those cases. But as we add more features that utilize the in-process
interface, that may eventually stop being feasible.

> > [dropping unpack-objects]
> >     Maybe that would be worth making part of the project?
> 
> I'm reluctant to do so because I don't want to increase the scope too
> much - although if my project has relatively narrow scope for an
> Outreachy project, we can do so. As for eliminating the utility of
> having richer communication, I don't think so, because in the situations
> where we require richer communication (right now, situations to do with
> partial clone), we specifically run index-pack anyway.

Yeah, we're in kind of a weird situation there, where unpack-objects is
used less and less. I wonder how many surprises are lurking where
somebody reasoned about index-pack behavior, but unpack-objects may do
something slightly differently (I know this came up when we looked at
fsck-ing incoming objects for submodule vulnerabilities).

I kind of wonder if it would be reasonable to just always use index-pack
for the sake of simplicity, even if it never learns to actually unpack
objects. We've been doing that for years on the server side at GitHub
without ill effects (I think the unpack route is slightly more efficient
for a thin pack, but since it only kicks in when there are few objects
anyway, I wonder how big an advantage it is in general).

-Peff


