On 11/23/2010 09:46 PM, Dave Chinner wrote:
...
> I note that the load is
> generating close to 10,000 iops on my test system, so it may very
> well be triggering load related problems in your raid controller...
Dave, thanks for all the explanations on the BBWC.
I wanted to ask how you measured that it's 10,000 IOPS with that
workload. Is it with iostat -x?
If so, which column exactly do you look at, and over what period do you
average the values? I too can sometimes see values of up to around
10,000 in the "w/s" column for my MD RAID array (currently a 16-disk
RAID-5 with XFS delaylog) if I use
iostat -x 10 (which I think averages write IOPS over 10 seconds),
but only for a few samples of iostat, not for the whole run of the
"benchmark". Do you mean you see 10,000 averaged over the whole benchmark?
Also, I'm curious: do you remember how long one run (10 parallel tar
unpacks) takes to complete on your 12-disk RAID-0 + BBWC?
A better test would probably exclude the bunzip2 step from the
benchmark, like the following, though it probably won't make more than
a 10-second difference:
/perftest/xfs# bzcat linux-2.6.37-rc2.tar.bz2 > linux-2.6.37-rc2.tar
/perftest/xfs# mkdir dir{1,2,3,4,5,6,7,8,9,10}
/perftest/xfs# for i in {1..10} ; do time tar -xf linux-2.6.37-rc2.tar -C dir$i & done ; echo waiting now ; time wait ; echo syncing now ; time sync
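To make the runs more comparable, I suppose one could also start each
run from empty target directories with the page cache dropped (my own
addition, not part of the original test, and note it means each run
re-reads the tarball from disk):

/perftest/xfs# rm -rf dir{1,2,3,4,5,6,7,8,9,10}
/perftest/xfs# mkdir dir{1,2,3,4,5,6,7,8,9,10}
/perftest/xfs# sync
/perftest/xfs# echo 3 > /proc/sys/vm/drop_caches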
Thanks again for all the explanations.