Yeah, as I understand it you should have 6000 IOPS available for the md
device (ideally).
The iostats you display certainly look benign... but the key time to be
sampling would be when you see the lock list explode - could look very
different then.
Re vm.dirty* - I would crank the values down by a factor of 5:
vm.dirty_background_ratio = 1 (down from 5)
vm.dirty_ratio = 2 (down from 10)
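In /etc/sysctl.conf form that would look something like this (just a sketch of the values suggested above - tune to your workload):

```
# /etc/sysctl.conf fragment - lower dirty-page writeback thresholds
vm.dirty_background_ratio = 1    # background writeback starts at 1% of RAM dirty
vm.dirty_ratio = 2               # writers block once 2% of RAM is dirty
```

Reload with `sysctl -p`, or set them live with `sysctl -w vm.dirty_background_ratio=1` etc.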
Assuming of course that you actually are seeing an IO stall (which
should be catchable via iostat or iotop)... and not some other issue.
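Something like this left running would catch that moment (assuming the array is md0 and sysstat's iostat is installed - adjust the device name to yours):

```shell
# Extended per-device stats every 5 seconds, timestamped, appended to a
# log; afterwards read back the interval that overlaps the lock-list
# spike and look at %util, await and avgqu-sz.
iostat -x -t -d 5 /dev/md0 >> iostat_md0.log
```

Running `iotop -o` in another terminal at the same time narrows it down to the processes actually doing IO at that instant.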
Otherwise leave 'em alone and keep looking :-)
Cheers
Mark
On 02/04/13 13:31, Armand du Plessis wrote:
I had a look at the iostat output (on a 5s interval) and pasted it
below. The utilization and waits seem low. Included a sample below, #1
taken during normal operation; when the locks happen it basically drops
to 0 across the board. My (mis)understanding of the IOPS was that it
would be 1000 IOPS per volume, and that RAID0 should give me quite a bit
higher throughput than a single EBS volume setup. (My naive envelope
calculation was #volumes * PIOPS = Effective IOPS :/)
I'm looking into the vm.dirty_background_ratio and vm.dirty_ratio
sysctls. Is there any guidance, or are there links available, that would
be useful as a starting point?
Thanks again for the help, I really appreciate it.
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance