On Sat, Jun 16, 2007 at 11:31:47AM +0200, Florian D. wrote:
> Chris Mason wrote:
> > Strange, these numbers are not quite what I was expecting ;)  Could you
> > please post your fio job files?  Also, how much ram does the machine
> > have?  Only writing doesn't seem like enough to fill the ram.
> >
> > -chris
>
> Sure:
>
> [global]
> directory=/mnt/temp/default
> filename=testfile
> size=300m
> randrepeat=1
> overwrite=1
> end_fsync=1

[ very bad results on btrfs with these parameters ]

Ok, the numbers make more sense now.  Basically what is happening is
that during the random IO phase, fio is hitting every single block in
the file.  Btrfs will allocate new blocks in a sequential fashion, but
the fsync does writeback in page order.  So, the fsync sees completely
random block ordering, and then we see it again on the reads.

In ext3, even though the writes are random, the fsync uses the original
(sequential) ordering of the blocks, and everything works nicely.

The fix is either delayed allocation or defrag-on-writeback.  Another
option (which I'll have to do for O_SYNC performance) is to leave space
in the blocks allocated to the file for COWs (basically strides of
allocated blocks).

I'll do the defrag-on-writeback right after enospc.

-chris
-
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
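(Editor's note: the effect described above can be illustrated with a toy model. This is not btrfs code; the block count and the fixed overwrite permutation are made up for the example. Each COW overwrite relocates a logical block to the next free disk block, so a page-order fsync walk hits disk in scattered order, while an in-place filesystem keeps the original sequential layout.)

```python
# Toy model: random overwrites on a COW filesystem vs. in-place writes.
# Hypothetical parameters: 8 logical blocks, one fixed overwrite order.

NBLOCKS = 8

def cow_overwrite(order):
    """COW: each overwritten logical block is given the next free
    disk block, like btrfs allocating new blocks sequentially."""
    disk = list(range(NBLOCKS))   # initial sequential on-disk layout
    next_free = NBLOCKS
    for b in order:               # random-overwrite pass
        disk[b] = next_free       # logical block b moves to a new location
        next_free += 1
    return disk

def inplace_overwrite(order):
    """In-place (ext3-style): a logical block keeps its disk block,
    so the original sequential layout survives the overwrites."""
    return list(range(NBLOCKS))

# A fixed permutation standing in for fio's random IO phase.
order = [5, 2, 7, 0, 3, 6, 1, 4]

cow = cow_overwrite(order)
inplace = inplace_overwrite(order)

# fsync writes back in page (logical) order; on the COW layout the
# disk blocks it visits come out scrambled, forcing seeks.
print("COW layout:     ", cow)       # scattered disk blocks
print("in-place layout:", inplace)   # still sequential
```

Delayed allocation fixes this by deferring the block choice until writeback, when the logical order is known; defrag-on-writeback reorders the blocks afterward instead.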