On Fri, Dec 07, 2012 at 06:39:49PM -0700, Chris Mason wrote:
> On Fri, Dec 07, 2012 at 05:17:05PM -0700, Dave Chinner wrote:
> > On Fri, Dec 07, 2012 at 02:03:06PM -0500, Chris Mason wrote:
> > > On a single flash drive doing random 4K writes, xfs does 950MB/s into
> > > regular extents but only 400MB/s into preallocated extents.
> > >
> > > http://masoncoding.com/presentation/perf-linuxcon12/fallocate.png
> >
> > This is bordering on irrelevancy, but can you provide the workload
> > you were running to generate this graph? Random 4k writes could be
> > anything, really.
>
> This one was fio aio/dio, I'll dig out the job file and rerun it on
> 3.7-rc on Monday. Any real random write is going to show this with
> enough load.

Ok, I ran this against 3.6. Since my box has two iodrives in it now, I
tossed them into lvm and ran striped over both. A single drive is iop
bound at 1GB/s, and we're able to push 2GB/s over both.

LVM slows it down slightly, and if you let the runs go long enough, you
can see the little log structured squirrels jumping in from time to
time. Long story short, on the lvm block device we average about
1.7GB/s over the two drives. This is iop bound; the two cards can push
about 2.6GB/s doing streaming writes.

XFS without preallocation comes very close to the iops-bound number.
This is really impressive, but it also means every additional IO needed
to track the preallocation is going to hurt the bottom line. With
preallocation on, the speed is the same with one drive as with two.

Eric had asked me to do a run with holes, and they come out a little
worse than the preallocated case.

Graphs:

http://masoncoding.com/mason/benchmark/xfs-fallocate/xfs-random-write-compare.png

The fio job is in that xfs-fallocate directory, and included below.

-chris

[global]
bs=4k
direct=1
ioengine=aio
size=12g
rw=randwrite
norandommap
runtime=30
iodepth=1024

# set overwrite=1 to force us to fully overwrite
# the preallocated files before the random IO starts
#
#overwrite=1

# set fallocate=none to use sparse files
#fallocate=none

# run 4 jobs where each job is operating on
# only one file. This way there's no lock contention
# on the file itself.
#
[f1]
filename=/mnt/f1

[f2]
filename=/mnt/f2

[f3]
filename=/mnt/f3

[f4]
filename=/mnt/f4
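
As a rough illustration of what the two cases in the fio job boil down to
at the syscall level, here is a minimal C sketch (not part of Chris's test
harness; the /mnt/f1 path and the 1GiB size are placeholders). It
preallocates a file with fallocate(), so space is reserved as unwritten
extents, then issues one aligned 4k O_DIRECT write of the kind fio is
hammering with above. Swapping the fallocate() call for the commented-out
ftruncate() gives the sparse-file case that the fallocate=none option
exercises.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define FILE_SIZE (1UL << 30)   /* 1GiB placeholder test file */
#define BLOCK     4096UL

int main(void)
{
	void *buf;
	int fd = open("/mnt/f1", O_CREAT | O_WRONLY | O_DIRECT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Preallocated case: reserve the whole file as unwritten extents. */
	if (fallocate(fd, 0, 0, FILE_SIZE) < 0)
		perror("fallocate");

	/* Sparse case instead: leave holes, let writes allocate on demand. */
	/* ftruncate(fd, FILE_SIZE); */

	if (posix_memalign(&buf, BLOCK, BLOCK)) {
		close(fd);
		return 1;
	}
	memset(buf, 0xab, BLOCK);

	/* One 4k direct write at a random aligned offset; the fio job
	 * issues these continuously across four such files. */
	off_t off = (off_t)(rand() % (FILE_SIZE / BLOCK)) * BLOCK;
	if (pwrite(fd, buf, BLOCK, off) != (ssize_t)BLOCK)
		perror("pwrite");

	free(buf);
	close(fd);
	return 0;
}

In the preallocated case each such write also has to convert the covering
unwritten extent to written, which is the extra per-IO work the numbers
above are attributing the slowdown to.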