On 08/05/2011 09:00 AM, Kevin Grittner wrote:
optimal pool size = ((2 * actual core count) + effective spindle
count)
How does that work? If your database fits in memory, your optimal TPS is
constrained only by CPU. Any fetch from disk reduces your throughput
through I/O waits. How do you account for SSDs/PCIe cards, which act as
an effective spindle multiplier?
I've seen Java apps that, because several systems share a Hibernate
connection pool, aren't compatible with connection poolers such as
PgBouncer. As such, they ran with 50x the CPU count in connections and
still managed 12,000 TPS because everything in use was cached. Throw a
disk seek or two in there, and it drops to 2,000 TPS or less. Throw in a
PCIe card, and pure streams of "disk" reads stay at 12,000 TPS.
It just seems a little counter-intuitive. I totally agree that it's not
optimal to have more connections than effective threads, but *adding*
spindles? I'd be more inclined to believe this:
optimal pool size = 3*cores - cores/spindles
Then, as your spindles increase, you're subtracting less and less until
you reach optimal 3x.
One disk on a 4-CPU system?
12 - 4 = 8. So you'd have the classic 2x cores.
On a RAID 1+0 with 4 disk pairs (still 4 CPUs)?
12 - 1 = 11.
On a giant SAN with a couple dozen disks, or a PCIe card that tests an
order of magnitude faster than a 6-disk RAID?
12 - [small fraction] = 12
It still fits your 3x rule, but seems to actually account for the fact
that disks suck. :p
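To make the comparison concrete, here's a throwaway Python sketch of
both heuristics (function names are mine, and "effective spindles" is
whatever multiplier you'd assign your storage; none of this comes from
either post):

```python
def pool_size_original(cores, effective_spindles):
    """The quoted heuristic: (2 * actual core count) + effective spindle count."""
    return 2 * cores + effective_spindles

def pool_size_proposed(cores, effective_spindles):
    """The alternative above: 3*cores - cores/spindles, approaching 3x cores
    as the effective spindle count grows."""
    return 3 * cores - cores / effective_spindles

# The three worked examples above, all on a 4-CPU box:
for spindles in (1, 4, 24):
    print(spindles, pool_size_proposed(4, spindles))
# 1 spindle  -> 8.0  (the classic 2x cores)
# 4 spindles -> 11.0
# 24 spindles -> ~11.8, i.e. essentially the 3x ceiling
```

The key behavioral difference: the original formula grows without bound
as spindles are added, while the proposed one converges to 3x cores.
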
--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
sthomas@xxxxxxxxx