On Tue, Nov 23, 2010 at 04:14:16PM -0600, Stan Hoeppner wrote:
> Dave Chinner put forth on 11/23/2010 2:46 PM:
>
> > I've been unable to reproduce the problem with your test case (been
> > running over night) on a 12-disk, 16TB dm RAID0 array, but I'll keep
> > trying to reproduce it for a while. I note that the load is
> > generating close to 10,000 iops on my test system, so it may very
> > well be triggering load related problems in your raid controller...
>
> Somewhat off topic, but how are you generating 10,000 IOPS by carving a
> 16TB LUN/volume from 12 x 2TB SATA disk spindles? Such drives aren't
> even capable of 200 seeks per second. Even if they were you'd top out
> at less than 2,500 IOPS (random). 16TB/12=1.33 TB per disk. No such
> capacity disk exists. So I assume you're using 12 x 2TB disks and
> slicing/dicing out 16TB. What am I missing Dave?

512MB of BBWC backing the disks. The BBWC does a much better job of
reordering out-of-order writes than the Linux elevators because 512MB
is a much bigger window than a couple of thousand 4k IOs. Hence
metadata/small file intensive workloads go much faster than you'd
expect from just looking at the IO patterns and the capability of the
disks.

IOWs, for write workloads that are not purely random, the disk
subsystem behaves more like an SSD than a RAID0 array of spinning
rust...

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
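[Editor's note: the back-of-envelope arithmetic in the thread above (12 spindles at under ~200 seeks/sec each, versus the observed ~10,000 IOPS) can be sketched as a quick check. The 200 seeks/sec figure is Stan's estimate for SATA drives, not a measured value, and the helper function below is illustrative, not anything from the thread.]

```python
def raid0_random_iops_ceiling(spindles: int, seeks_per_sec: float) -> float:
    """Theoretical random-IOPS ceiling for a RAID0 stripe of spinning
    disks: each random request lands on one spindle, so per-disk seek
    rates simply add across the array."""
    return spindles * seeks_per_sec

# Stan's numbers: 12 SATA disks, < 200 seeks/sec each
ceiling = raid0_random_iops_ceiling(12, 200)
observed = 10_000  # IOPS Dave reported for the metadata-heavy load

print(f"random ceiling: {ceiling:.0f} IOPS")          # 2400 IOPS
print(f"observed:       {observed} IOPS")
print(f"ratio:          {observed / ceiling:.1f}x")
```

The gap between the ~2,400 IOPS ceiling and the observed ~10,000 is what the 512MB of battery-backed write cache absorbs: writes are acknowledged from cache and issued to the spindles in a far better order than any seek-bound estimate assumes.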