On Thu, 2009-06-04 at 17:20 +0200, Frederic Weisbecker wrote:
> Hi,
>
> On Thu, May 28, 2009 at 01:46:33PM +0200, Jens Axboe wrote:
> > Hi,
> >
> > Here's the 9th version of the writeback patches. Changes since v8:
>
> I've just tested it on UP on a single disk.
>
> I've run two parallel dbench tests on two partitions and
> tried it with this patch and without.

I also tested V9 with a multiple-dbench workload by starting multiple
dbench tasks, where every task has 4 processes doing I/O on one
partition (file system). Mostly I use JBODs which have 7/11/13 disks.
I didn't find a result regression between the vanilla and V9 kernels
on this workload.

> I used 30 processes each during 600 secs.
>
> You can see the result in the attachment,
> and also here:
>
> http://kernel.org/pub/linux/kernel/people/frederic/dbench.pdf
> http://kernel.org/pub/linux/kernel/people/frederic/bdi-writeback-hda1.log
> http://kernel.org/pub/linux/kernel/people/frederic/bdi-writeback-hda3.log
> http://kernel.org/pub/linux/kernel/people/frederic/pdflush-hda1.log
> http://kernel.org/pub/linux/kernel/people/frederic/pdflush-hda3.log
>
> As you can see, bdi writeback is faster than pdflush on hda1 and slower
> on hda3. But, well, that's not the point.
>
> What I can observe here is the difference in the standard deviation
> of the rate between two parallel writers on the same device (but
> two different partitions, hence superblocks).
>
> With pdflush, the distributed rate is much better balanced than
> with bdi writeback on a single device.
>
> I'm not sure why. Is there something in these patches that makes
> several bdi flusher threads for the same bdi not well balanced
> between them?
>
> Frederic.
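As a side note, the "balance" Frederic is describing can be quantified as the standard deviation of the per-partition mean throughput. A minimal sketch below shows the comparison; the throughput numbers are made up purely for illustration and are NOT the measured dbench results from the attached logs:

```python
# Quantify writer balance: given per-partition throughput samples for two
# writers on the same device, compare the spread of their mean rates.
# All numbers here are hypothetical, not Frederic's measurements.
from statistics import mean, stdev

# hypothetical MB/s samples for the two partitions (hda1, hda3)
pdflush = {"hda1": [52, 50, 51, 49], "hda3": [48, 50, 49, 51]}
bdi     = {"hda1": [70, 72, 68, 71], "hda3": [30, 28, 33, 29]}

def imbalance(rates):
    """Std deviation of the per-partition mean rates; lower = better balanced."""
    means = [mean(samples) for samples in rates.values()]
    return stdev(means)

print("pdflush imbalance: %.2f MB/s" % imbalance(pdflush))
print("bdi     imbalance: %.2f MB/s" % imbalance(bdi))
```

A low number means the two writers progress at roughly the same rate; a large one means one partition is starving the other, which is the asymmetry visible in the hda1/hda3 plots.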