Merlin/Luke:
> in theory, with 10 10k disks in raid 10, you should be able to keep
> your 2fc link saturated all the time unless your i/o is extremely
> random. random i/o is the wild card here, ideally you should see at
> least 2000 seeks in bonnie...lets see what comes up.
I suspect the problem here is the sequential I/O rate - let's wait and see what the dd test results look like.
Here are the tests that you suggested, run on both the local disks (WAL) and the SAN (tablespace). The random seeks seem to be far below what Merlin said was "good", so I am a bit concerned. There is a bit of other activity on the box at the moment which is hard to stop, so that may have affected the results.

Here is the bonnie++ output:

Version 1.03        ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
             Size   K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
Local Disks   31G   45119  85 56548  21 27527   8 35069  66 86506  13 499.6   1
SAN           31G   53544  98 93385  35 18266   5 24970  47 57911   8 611.8   1

And here are the dd results for 16GB RAM, i.e. 4,000,000 8K blocks:

# Local Disks

$ time bash -c "dd if=/dev/zero of=/home/myhome/bigfile bs=8k count=4000000 && sync"
4000000+0 records in
4000000+0 records out

real    10m0.382s
user    0m1.117s
sys     2m45.681s

$ time dd if=/home/myhome/bigfile of=/dev/null bs=8k count=4000000
4000000+0 records in
4000000+0 records out

real    6m22.904s
user    0m0.717s
sys     0m53.766s

# Fibre Channel SAN

$ time bash -c "dd if=/dev/zero of=/data/test/bigfile bs=8k count=4000000 && sync"
4000000+0 records in
4000000+0 records out

real    5m58.846s
user    0m1.096s
sys     2m18.026s

$ time dd if=/data/test/bigfile of=/dev/null bs=8k count=4000000
4000000+0 records in
4000000+0 records out

real    14m9.560s
user    0m0.739s
sys     0m53.806s
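To sanity-check myself, here is the arithmetic I used to turn the dd timings into rough MB/s figures (just a sketch, assuming bc is installed; the numbers come straight from the timings above, not from extra runs). Each dd pass moves 4,000,000 x 8192 bytes = ~32.8 GB:

$ bytes=$((4000000 * 8192))                        # 32,768,000,000 bytes per run
$ echo "scale=1; $bytes / 600.382 / 1000000" | bc  # local write  (real 10m0.382s)
$ echo "scale=1; $bytes / 382.904 / 1000000" | bc  # local read   (real 6m22.904s)
$ echo "scale=1; $bytes / 358.846 / 1000000" | bc  # SAN write    (real 5m58.846s)
$ echo "scale=1; $bytes / 849.560 / 1000000" | bc  # SAN read     (real 14m9.560s)

If I've converted these correctly, that works out to roughly 55/86 MB/s write/read on the local disks and 91/39 MB/s on the SAN, so the SAN sequential read looks like the weak spot relative to the roughly 200 MB/s a 2Gb FC link should be able to carry.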