On Monday 11 December 2006 19:59, Marco Costalba wrote:
> On 12/11/06, Linus Torvalds <torvalds@xxxxxxxx> wrote:
> >
> > However, you seem to continually ignore the thing I've asked you to do
> > several times: try with a cold-cache situation.
>
> Yes. I will test it. I was testing with warm cache also due to my
> curiosity about comparing pipes with temporary-file reading as the data
> exchange facility. So I needed to avoid HD artifacts.

Hi,

I just looked at the QProcess implementation in Qt 3.3.6, as I was curious.

Qt does a big select() call in the event loop. If there is data available
from the child process, it reads in chunks of at most 4096 bytes, with a
select() call in between to see whether there is still data available.
After every read, the data is concatenated onto the read buffer.

For the slow/cold-cache case, this is probably best *if* the consumer
application can act as quickly as possible when data arrives. That makes it
a good fit for avoiding --topo-order and doing some redrawing of the graph
yourself if needed. For sure, this gives the fastest visual appearance. You
could start filling the list after e.g. 30 revs are read.

Obviously, there is some possibility for improvement _when_ you know that
you want to read large amounts of data in big chunks, given that QProcess
uses at least two system calls for every 4 kB.

A general question: how many context switches are involved in such a
producer/consumer scenario, given that the producer writes one line at a
time, and the consumer uses poll/select to wait for the data? Is there some
possibility to make the kernel write-combine single small producer writes
into bigger chunks, which get delivered at once (or smaller data only after
a short timeout)?

Josef
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html