Re: git 2.36.0 regression: pre-commit hooks no longer have stdout/stderr as tty

On Wed, Apr 20 2022, Emily Shaffer wrote:

[I'll reply to most of this & the other questions in the form of
patches; here I'm just commenting on some of it]

> On Wed, Apr 20, 2022 at 10:25 AM Junio C Hamano <gitster@xxxxxxxxx> wrote:
>>
>> Emily Shaffer <emilyshaffer@xxxxxxxxxx> writes:
>>
>> >> In the longer term, there are multiple possible action items.
>> >> ...
>> >>
>> >>  * We should teach hooks API to make it _optional_ to use the
>> >>    parallel subprocess API.  If we are not spawning hooks in
>> >>    parallel today, there is no reason to incur this regression by
>> >>    using the parallel subprocess API---this was a needless bug, and
>> >>    I am angry.
>> >
>> > To counter, I think that having hooks invoked via two different
>> > mechanisms depending on how many are provided or whether they are
>> > parallelized is a mess to debug and maintain. I still stand by the
>> > decision to use the parallel subprocess API, which I think was
>> > reasonable to expect to do the same thing when jobs=1, and I think we
>> > should continue to do so. It simplifies the hook code significantly.
>>
>> A simple code that does not behave as it should and causes end-user
>> regression is not a code worth defending.  Admitting it was a bad
>> move we made in the past is the first step to make it better.
>
> I am also sorry that this use case was broken. However, I don't see
> that it's documented in 'git help githooks' or elsewhere that we
> guarantee isatty() (or similar) of hooks matches that of the parent
> process. I think it is an accident that this worked before, and not
> something that was guaranteed by Git documentation - for example, we
> also do not have regression tests ensuring that behavior for hooks
> today, either, or else we would not be having this conversation. (If I
> simply missed the documentation promising that behavior, then I am
> sorry, and please point me to it.)

You're correct that it wasn't documented, and as regressions go that
makes it *slightly* better. I.e. at least it's not a publicly documented
promise.

Anyone using this part of the interface would have discovered it by
experimentation, or (reasonably) assumed that git was invoking the hook
without any special redirection or buffering.
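
To make that kind of reliance concrete, here's a made-up pre-commit
hook (illustrative only, not the one from the report upthread) that
only colors its output when it thinks a human is looking. Before 2.36
the hook inherited git's stderr, so "test -t 2" answered that question;
with 2.36 the output goes through a pipe, so the check fails even in an
interactive session:

#!/bin/sh
# Hypothetical pre-commit hook: only color output when stderr looks
# like a terminal, i.e. when a human is likely to be watching.
if test -t 2
then
	red=$(tput setaf 1)
	reset=$(tput sgr0)
else
	red=
	reset=
fi

# Refuse to commit if the staged changes introduce whitespace errors.
if ! git diff --cached --check >&2
then
	echo "${red}whitespace errors found, not committing${reset}" >&2
	exit 1
fi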

And you're also right that we didn't have any test coverage for this.
Actually, before t/t1800-hook.sh we didn't have any test coverage at
all on stdout_to_stderr for hooks (at least for those converted to the
API so far), which is pretty fundamental.
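
For what it's worth, a rough sketch of what that sort of
stdout_to_stderr coverage could look like, in the style of our test
suite (the test name and hook body are illustrative and not taken from
t/t1800-hook.sh; it assumes the usual test-lib setup with a repository
already in place):

test_expect_success 'hook output on stdout ends up on stderr' '
	test_write_lines "#!/bin/sh" "echo from-the-hook" >.git/hooks/pre-commit &&
	chmod +x .git/hooks/pre-commit &&
	git commit --allow-empty -m "trigger the hook" >out 2>err &&
	! grep from-the-hook out &&
	grep from-the-hook err
'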

But none of that (except perhaps the doc omission) makes this any less
of a regression. We don't have 100% test coverage, and we can't assume
that something isn't being relied on in the wild just because it isn't
documented or tested for. It is, as the report upthread indicates.

In this case "100% test coverage" in the "make coverage" sense wouldn't
help; this is part of the "200% test coverage", i.e. it's in how an
external user expects to use and interact with the command. So it can
remain uncovered even if our own tests touch 100% of our own code.

> [...]
>> > Hm. I was going to mention that Ævar and I discussed the possibility
>> > of setting an environment variable for hook child processes, telling
>>
>> That...
>>
>> > them which hook they are being run as - e.g.
>> > "GIT_HOOK=prepare-commit-msg" - but I suppose that relying on that
>> > alone doesn't tell us anything about whether the parent is being run
>> > in tty. I agree it could be very useful to simply pass
>> > GIT_PARENT_ISATTY to hooks (and I suppose other child processes).
>> > Could we simply do that from start_command() or something else deep in
>> > run-command.h machinery? Then Anthony's use case becomes
>> >
>> > if [ -t 1 ] || [ -n "$GIT_PARENT_ISATTY" ]
>> >  ...
>> >
>> > and no need to examine Git version.

Just to clarify this a bit, we discussed passing down GIT_HOOK so that
you could e.g. symlink all your hooks to some "hook router" and have it
dispatch on the hook name (there's a sketch of such a router below).

You can do that right now with the file-based hooks, since you have to
symlink them to such a router anyway, but you couldn't with future
config-based hooks.

IOW it's conceptually entirely separate from "how does this hook expect
to behave" vis-a-vis calling isatty() or whatever. It would just be
working around our own implementation details, i.e. whether we invoke
a path or a configured command.
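
To make that dispatch pattern concrete, a purely illustrative sketch of
such a router (the my-* commands are made up, and GIT_HOOK does not
exist yet, it's only the proposed variable discussed above):

#!/bin/sh
# Hypothetical "hook router" that every hook is symlinked to.  With
# file-based hooks $0 carries the hook name via the symlink; GIT_HOOK
# is the proposed variable that config-based hooks would need instead.
hook=${GIT_HOOK:-$(basename "$0")}

case "$hook" in
pre-commit)
	exec my-lint-staged
	;;
commit-msg)
	# $1 is the path to the proposed commit message file
	exec my-check-message "$1"
	;;
*)
	# hooks we don't handle succeed silently
	exit 0
	;;
esac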

>> But DO NOT call it ISATTY.  "Are we showing the output to human
>> end-users" is the question it is answering to, and isatty() happens
>> to be an implementation detail on POSIXy system.
>>
>> "This" and "That" above make it smell like discussion was done, but
>> everybody got tired of discussing and the topic was shipped without
>> the necessary polish?  That sounds like a process failure, which we
>> may want to address in the new development cycle, not limited to this
>> particular topic.
>
> I think, rather, during discussion we said "without knowing how real
> users want to use hooks, it's not possible for us to make a good
> design for individual hooks to state whether they need to be
> parallelized or not." Perhaps that means this body of work should have
> stayed in 'next' longer, rather than making it to a release?
>
> For what it's worth, Google internally has been using multiple hooks
> via config for something like a year, with this design, from a
> combination of 'next' and pending hooks patches. But we haven't
> imagined the need to color hook output for users and check isatty() or
> similar. I think there are not many other consumers of 'next' besides
> the Google internal release. So I'm not sure that longer time in
> 'next' would have allowed us to see this issue, either.

We're both thoroughly on the inside of this particular process failure,
so we're both bound to have biases here.

But having said that, I agree with you here. I.e. as a mechanism for
mitigating mistakes and catching obscure edge cases, just being more
careful or having things sit in 'next' for longer has, I think, proven
not to be an effective method (not just in this case, but in a few
similar cases).

I'm not sure what the solution is exactly, but I'm pretty sure it
involves more controlled exposure to the wild (e.g. shipping certain
things behind feature flags first), not deferring that exposure for long
periods, which is what having things sit in 'next' for longer amounts
to.



