[ ... ]
>> lfs.303:
>>  0: [0..4192255]: 36322376672..36326568927
[ ... ]
>> lfs.3:
>>  0: [0..1048575]: 2039336992..2040385567

$ factor 36322376672 2039336992
36322376672: 2 2 2 2 2 37 3257 9419
2039336992: 2 2 2 2 2 7 11 37 22369
$ factor 4192256 1048576
4192256: 2 2 2 2 2 2 2 2 2 2 2 23 89
1048576: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

The starting addresses look to be 16KiB-aligned (each starting sector is a
multiple of 2^5 512B sectors). I would have expected otherwise. The sizes
are multiples of 1MiB (2047MiB and 512MiB), which is plausible.

[ ... ]

> I'm not familiar with the btrace output, but here's the summary of roughly
> 5 minutes:
>> Total (8,16):
>>  Reads Queued:     16,914,  1,888MiB   Writes Queued:     47,147,  1,438MiB
>>  Read Dispatches:  16,914,  1,888MiB   Write Dispatches:  47,050,  1,438MiB
>>  Reads Requeued:        0              Writes Requeued:        0
>>  Reads Completed:  16,914,  1,888MiB   Writes Completed:  47,050,  1,438MiB
>>  Read Merges:           0,      0KiB   Write Merges:          97,    592KiB
>>  IO unplugs:       17,060              Timer unplugs:          6
>> Throughput (R/W): 5,528KiB/s / 4,209KiB/s
>> Events (8,16): 418,873 entries
>> Skips: 0 forward (0 - 0.0%)

That's around 17k reads, or about 60/s, averaging roughly 114KiB each, and
around 47k writes, or about 160/s, averaging 31KiB. Both reads and writes
proceed at around 4-5MiB/s. Since the RAID5 is managed by the PERC, the
reads cannot be the read half of RMW cycles (those happen inside the
controller), and it is unlikely that they are sequential with respect to
the writes. There may be quite a bit of random access going on.

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
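For reference, the arithmetic behind the alignment and average-request-size claims can be sketched with shell arithmetic, using only the figures quoted above (512B sectors as usual; 32 sectors = 16KiB):

```shell
# A start sector divisible by 32 (= 2^5) sits on a 16KiB boundary,
# matching the 2^5 factor shown by `factor` for both extents above.
for s in 36322376672 2039336992; do
  echo "$s 16KiB-aligned: $(( s % 32 == 0 ))"   # -> 1 for both
done

# Average request sizes over the ~300s btrace window:
echo "avg read:  $(( 1888 * 1024 / 16914 ))KiB"   # -> 114KiB
echo "avg write: $(( 1438 * 1024 / 47050 ))KiB"   # -> 31KiB
```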