On Tue, Jun 05, 2018 at 01:18:08AM +0300, Max Kirillov wrote:

> > On Sun, Jun 03, 2018 at 12:27:49AM +0300, Max Kirillov wrote:
> > Since this is slightly less efficient, and because it only matters if
> > the web server does not already close the pipe, should this have a
> > run-time configuration knob, even if it defaults to
> > safe-but-slightly-slower?
>
> Personally, I of course don't want this. Also, I don't think
> the difference is much noticeable. But you can never be sure
> without trying. I'll try to measure some numbers.

I don't know if it will matter or not. I just wonder if we want to leave
an escape hatch for people who might. I could take or leave it.

> Actually, it is already 3rd same error in this file. Maybe
> deserve some refactoring. I will change the message also.

Thanks, that kind of related cleanup is very welcome.

> > We generally prefer to have all commands, even ones we don't expect to
> > fail, inside test_expect blocks (e.g., with a "setup" description).
>
> Will the defined variables get to the next test? I'll try to
> do as you describe.

Yes, the tests are all run as evals. So as long as you don't open a
subshell yourself, any changes you make to process state will persist.

> >> +test_expect_success 'fetch plain truncated' '
> >> +	test_http_env upload \
> >> +		"$TEST_DIRECTORY"/t5562/invoke-with-content-length.pl fetch_body.trunc git http-backend >act.out 2>act.err &&
> >> +	test_must_fail verify_http_result "200 OK"
> >> +'
> >
> > Usually test_must_fail on a checking function like this is a sign that
> > the check is not as robust as we'd like. If the function checks two
> > things "A && B", then checking test_must_fail will only let us know
> > "!A || !B", but you probably want to check both.
>
> Well here I just want to know that the request has failed,
> and we already know that it can fail in different ways,
> but the test is not going to differentiate those ways.
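[Editor's note: the "A && B" robustness point above can be sketched in plain shell. The `check` function below is hypothetical, not the real verify_http_result; it just shows why negating a compound check loses information.]

```shell
#!/bin/sh
# Hypothetical two-part checker: asserts property A, then property B.
check() {
	test "$1" = "A" &&	# property A
	test "$2" = "B"		# property B
}

# Negating the whole thing (as test_must_fail does) only proves
# "!A || !B" -- either failure mode satisfies it, so the test cannot
# tell which property was actually violated:
! check X B && echo "negation passes when only A is wrong"
! check A X && echo "negation passes when only B is wrong"
```

Both echo lines fire, even though the two runs fail for different reasons; that is the ambiguity the review comment is pointing at.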
OK, looking over your verify_http_result function, I _think_ we are OK
here, because the only && is against a printf, which we wouldn't really
expect to fail.

> >> +sleep 1; # is interrupted by SIGCHLD
> >> +if (!$exited) {
> >> +	close($out);
> >> +	die "Command did not exit after reading whole body";
> >> +}
>
> > Also, do we need to protect ourselves against other signals being
> > delivered? E.g., if I resize my xterm and this process gets SIGWINCH, is
> > it going to erroneously end the sleep and say "nope, no exited signal"?
>
> I'll check, but what could I do? Should I add blocking other
> signals there?

I think a more robust check may be to waitpid() on the child for up to N
seconds. Something like this:

  $SIG{ALRM} = sub {
          kill(9, $pid);
          die "command did not exit after reading whole body";
  };
  alarm(60);
  waitpid($pid, 0);
  alarm(0);

That should exit immediately if $pid does, and otherwise die after
exactly 60 seconds. Perl's waitpid implementation will restart
automatically if it gets another signal.

-Peff
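[Editor's note: the alarm()/waitpid() pattern suggested above has a rough shell analogue, sketched below. Names (`pid`, `watchdog`, `TIMEOUT`) and the `sleep 1` stand-in child are illustrative only, not part of the patch under review.]

```shell
#!/bin/sh
# Sketch: wait for a child, but SIGKILL it if it outlives the timeout.
TIMEOUT=5

sleep 1 &		# stand-in for the real child process
pid=$!

# Watchdog: fires only if the child is still around after $TIMEOUT.
# (Sketch caveat: a raw "kill -9 $pid" can race with pid reuse.)
( sleep "$TIMEOUT"; kill -9 "$pid" ) >/dev/null 2>&1 &
watchdog=$!

if wait "$pid"; then
	kill "$watchdog" 2>/dev/null	# child exited in time; cancel watchdog
	echo "child exited in time"
else
	echo "child killed after ${TIMEOUT}s timeout" >&2
	exit 1
fi
```

As with the Perl version, the waiting side returns as soon as the child exits, and the timeout bounds how long a wedged child can hang the test.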