After seeing much lower performance during pg_dump and pg_restore operations from a 10x15k SAN RAID1+1 XFS mount (allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime,nobarrier) than from the local-storage 2x15k RAID1 EXT4 mount, I ran the following test of the effect of read-ahead (RA):
# Repeat the whole sweep 10 times to get a feel for run-to-run variance
for t in `seq 1 1 10`
do
    # Every block device from /dev/sdb through /dev/sdz
    for drive in `ls /dev/sd[b-z]`
    do
        # Read-ahead values: 256, 512, then 1024..70000 in steps of 1024
        for ra in 256 512 `seq 1024 1024 70000`
        do
            echo benchmark-test: $drive $ra
            blockdev --setra $ra $drive
            hdparm -t $drive    # buffered (disk) read timing
            hdparm -T $drive    # cached read timing
            echo benchmark-test-complete: $drive $ra
        done
    done
done
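In case anyone wants to reproduce the attached plots, something along these lines can turn the loop's output into (drive, RA, MB/s) triples. This is just a sketch: the log file name and the exact hdparm output format ("Timing buffered disk reads: ... = N MB/sec") are assumptions on my part.
# Hypothetical post-processing: pair each "benchmark-test: <drive> <ra>" marker
# with the buffered-read throughput hdparm prints right after it. Assumes the
# loop's stdout was captured to benchmark.log.
awk '
  /^benchmark-test:/            { drive = $2; ra = $3 }
  /Timing buffered disk reads/  { print drive, ra, $(NF-1) }   # MB/sec field
' benchmark.log > readahead-results.txt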
In this test, the local mount's buffered reads perform best around RA~10k @ 150MB/sec and then start a steady decline. The SAN mount shows a similar but more gradual decline, peaking around RA~5k @ 80MB/sec, but with much greater variance. I was surprised at the 80MB/sec for the SAN - I was expecting 150MB/sec - and I'm also surprised at the variance. I understand that there are many more elements involved for the SAN: more drives, network overhead & latency, iSCSI, etc., but I'm still surprised.
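As a unit sanity check: blockdev's read-ahead is expressed in 512-byte sectors, so RA~10k corresponds to roughly 5 MiB. The applied setting can be verified like this (the device name below is just an example):
# Example only - substitute the real device. --getra reports read-ahead in
# 512-byte sectors; 10240 sectors * 512 bytes = 5 MiB.
blockdev --getra /dev/sdc
cat /sys/block/sdc/queue/read_ahead_kb    # same setting, reported in KiB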
Is this expected behavior for a SAN mount, or is it a hint at some misconfiguration? Thoughts?
Cheers,
Jan
Attachments: readahead-sdc.png, readahead-sda.png (PNG images)