Stefan Beller <sbeller@xxxxxxxxxx> writes:

>>> +		while (1) {
>>> +			ssize_t len = xread(cp->err, buf, sizeof(buf));
>>> +			if (len < 0)
>>> +				die("Read from child failed");
>>> +			else if (len == 0)
>>> +				break;
>>> +			else {
>>> +				strbuf_add(&out, buf, len);
>>> +			}
>>
>> ... and the whole thing is accumulated in core???
>
> The pipes have a limit, so we need to empty them to prevent
> back-pressure?

Of course.  But that does not lead to "we hold everything in core".
This side could choose to emit early (under protection of
args->mutex), e.g. after reading a line, emit it to our standard
output (or our standard error).

> And because we want to have the output of one task at a time, we
> need to save it up until we can put out the whole output, no?

I do not necessarily agree, and I think I said that already:

  http://thread.gmane.org/gmane.comp.version-control.git/276273/focus=276321

>>> +		}
>>> +		if (finish_command(cp))
>>> +			die("command died with error");
>>> +
>>> +		sem_wait(args->mutex);
>>> +		fputs(out.buf, stderr);
>>> +		sem_post(args->mutex);
>>
>> ... and emitted to standard error?
>>
>> I would have expected that the standard error would be left alone
>
> `git fetch`, which may be a good candidate for such an operation,
> provides progress on stderr, and we don't want to intermingle two
> different submodule fetch progress displays.
> ("I need to work offline for a bit, so let me get all of the latest
> stuff" -- so I'd run `git submodule foreach -j 16 -- git fetch --all`,
> though ideally we want to have `git fetch --recurse-submodules -j16`
> instead.)
>
>> (i.e. letting warnings from multiple jobs be mixed together simply
>> because everybody writes to the same file descriptor), while the
>> standard output would be line-buffered, perhaps captured by the
>> above loop and then emitted under mutex, or something.
>
>> I think I said this earlier, but latency to the first output counts
>
> "to the first stderr" in this case?
I didn't mean "output == the standard output stream".  As I said in
$gmane/276321, an early output, as an indication that we are doing
something, is important.

> Why would we want to unplug the task queue from somewhere else?

When you have a dispatcher more intelligent than a stupid FIFO, I
would imagine that you would want to be able to do this pattern,
especially when coming up with a task (not performing a task) takes a
non-trivial amount of work:

 - prepare a task queue and have N threads waiting on it;

 - plug the queue, i.e. tell the threads not to start picking tasks
   out of it yet;

 - run a large enough loop to fill the queue to a reasonable size,
   while keeping the threads waiting;

 - unplug the queue.  Now the threads can pick tasks from the queue,
   but they have many to choose from, and a dispatcher that can do
   better than a simple FIFO can take advantage of that;

 - keep filling the queue with more tasks, if necessary;

 - and finally, wait for everything to finish.

Without a "plug/unplug" interface, you _could_ do the above by doing
something stupid like this:

 - prepare a task queue and have N threads waiting on it;

 - loop to find enough tasks, but do not put them in the task queue,
   as the FIFO will eat them one-by-one; instead, hold onto them in a
   custom data structure that is outside the task queue system;

 - run a tight and quick loop to move them into the task queue;

 - keep finding more tasks and feed them to the task queue;

 - and finally, wait for everything to finish.