> My scripts, which read stdout from ssh, weren't seeing EOF from the
> remote session. It was being sent, but lost. I tracked it down to the
> following code, in ssh.c, at ssh_session2_open:
>
>         if (stdin_null_flag) {
>                 in = open(_PATH_DEVNULL, O_RDONLY);
>         } else {
>                 in = dup(STDIN_FILENO);
>         }
>         out = dup(STDOUT_FILENO);
>         err = dup(STDERR_FILENO);
>
> The remote session did close stdout. The sshd from which it was spawned
> signaled to close stdout. The ssh program received that signal and
> closed, well, something, but not stdout. It closed a copy.
> Importantly, it left a copy open, so my program got no EOF.
>
> Why not:
>
>         if (stdin_null_flag) {
>                 in = open(_PATH_DEVNULL, O_RDONLY);
>         } else {
>                 in = STDIN_FILENO;
>         }
>         out = STDOUT_FILENO;
>         err = STDERR_FILENO;
>
> If not that, how is a program that reads from ssh's output ever going to
> see EOF?
I'm not sure if the current behavior is the best, but it's pretty clear
that the reason for it is that the usual way to signal "end" on stdout
is to terminate the process. So ssh waits for the process it is running
on the server end to terminate, then it terminates, and then whatever is
piped from ssh sees EOF.
What you're proposing is if the server-end program deliberately closes
stdout before it terminates, should ssh then close stdout before *it*
terminates?
This could be especially interesting if the remote program hasn't closed
stderr yet -- the next program in the pipe, having seen EOF on stdin,
would expect that everything ssh was to do (including perhaps saving its
stderr) was already complete, but that might not be so.
Dale
_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@xxxxxxxxxxx
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev