On Fri, Aug 9, 2013 at 12:55 AM, Jan Kara <jack@xxxxxxx> wrote:
> On Thu 08-08-13 15:58:39, Dave Hansen wrote:
>> I was coincidentally tracking down what I thought was a scalability
>> problem (turned out to be full disks :). I noticed, though, that ext4
>> is about 20% slower than ext2/3 at doing write page faults (x-axis is
>> number of tasks):
>>
>> http://www.sr71.net/~dave/intel/page-fault-exts/cmp.html?1=ext3&2=ext4&hide=linear,threads,threads_idle,processes_idle&rollPeriod=5
>>
>> The test case is:
>>
>> https://github.com/antonblanchard/will-it-scale/blob/master/tests/page_fault3.c
>
> The reason is that ext2/ext3 do almost nothing in their write fault
> handler - they are about as fast as it can get. ext4 OTOH needs to reserve
> blocks for delayed allocation, set up buffers under a page, etc. This is
> necessary if you want to make sure that if data are written via mmap, they
> also have space available on disk to be written to (ext2/ext3 do not care
> and will just drop the data on the floor if you happen to hit ENOSPC during
> writeback).

Out of curiosity, why does ext4 need to set up buffers? That is, as long
as the fs can guarantee that there is reserved space to write out the
page, why isn't it sufficient to just mark the page dirty and let the
writeback code set up the buffers?

> I'm not saying the ext4 write fault path cannot possibly be optimized
> (no one has seriously looked into that AFAIK, so there may well be some
> low-hanging fruit), but it will always be slower than ext2/3. A more
> meaningful comparison would be with filesystems like XFS, which make
> similar guarantees regarding data safety.

FWIW, back when I actually tested this stuff, I had awful performance on
XFS, btrfs, and ext4. But I'm really only interested in whether IO (or
waiting for contended locks) happens on faults or not -- a handful of
microseconds while the fs allocates something from a slab doesn't really
bother me.

--Andy
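
For reference, the page_fault3.c test linked above amounts to roughly the
following (a minimal sketch, not the actual will-it-scale harness; the
temp-file path, 128 MB mapping size, and pass count are illustrative
assumptions): repeatedly mmap a file MAP_SHARED and dirty one byte per
page, so each pass takes one write fault per page through the
filesystem's ->page_mkwrite path.

/* Sketch of a page_fault3-style write-fault loop (illustrative only). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MEMSIZE (128UL * 1024 * 1024)	/* assumed mapping size: 128 MB */

int main(void)
{
	char path[] = "/tmp/wfault.XXXXXX";	/* hypothetical temp file */
	int fd = mkstemp(path);
	long pgsize = sysconf(_SC_PAGESIZE);
	unsigned long i;
	int pass;
	char *p;

	if (fd < 0 || ftruncate(fd, MEMSIZE)) {
		perror("setup");
		return 1;
	}
	unlink(path);

	for (pass = 0; pass < 16; pass++) {
		p = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/* One write fault per page of the fresh shared mapping. */
		for (i = 0; i < MEMSIZE; i += pgsize)
			p[i] = 1;
		munmap(p, MEMSIZE);
	}
	return 0;
}

Run against files on ext2/3 versus ext4 mounts, the per-fault overhead
discussed in the thread shows up directly in how quickly the inner loop
completes; the real harness just counts such iterations per second across
a varying number of tasks.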