Hi Scott,

On 5/12/09, Scott Carey <scott@xxxxxxxxxxxxxxxxx> wrote:
> Although nobody wants to support it, he should try the patch that Jignesh K.
> Shah (from Sun) proposed that makes ProcArrayLock lighter-weight. If it
> makes 32 cores much faster, then we have a smoking gun.
>
> Although everyone here is talking about this as an 'unoptimal' solution, the
> fact is there is no evidence that a connection pooler will fix the
> scalability from 16 -> 32 cores.
> Certainly a connection pooler will help most results, but it may not fix the
> scalability problem.
>
> A question for Dimitri:
> What is the scalability from 16 -> 32 cores at the 'peak' load that occurs
> near 2x the CPU count? Is it also poor? If this is also poor, IMO the
> community here should not be complaining about this unoptimal case -- a
> connection pooler at that stage does little and prepared statements will
> increase throughput but not likely alter scalability.

I'm attaching a small graph showing the TPS level on PG 8.4 depending on the number of cores (the X-axis is the number of concurrent users, the Y-axis is the TPS number). As you can see, the TPS increase is near linear when moving from 8 to 16 cores; on 32 cores it still grows, though somewhat differently. What is unclear is why the TPS level stays limited to about 11,000 TPS on 32 cores. And this is a pure read-only workload.

>
> If that result scales, then the short term answer is a connection pooler.
>
> In the tests that Jignesh ran -- making the ProcArrayLock faster helped the
> case where connections = 2x the CPU core count quite a bit.
>
> The thread about the CPU scalability is "Proposal of tunable fix for
> scalability of 8.4", originally posted by "Jignesh K. Shah"
> <J.K.Shah@xxxxxxx>, March 11 2009.
>
> It would be very useful to see results of this benchmark with:
> 1. A Connection Pooler

It will not help, as each client is *not* disconnecting/reconnecting during the test, and PG keeps up well even with 256 users. The TPS limit is already reached at 64 users, so I don't think a pooler will help here.

> 2. Jignesh's patch

I've already tested it, and it did not help in my case because the real problem is elsewhere (however, I have not yet retested it with my latest config params).

> 3. Prepared statements
>

Yes, I'm preparing this test (a small sketch of what I have in mind is at the very end of this mail).

> #3 is important, because prepared statements are ideal for queries that
> perform well with low statistics_targets, and not ideal for those that
> require high statistics targets. Realistically, an app won't have more than
> a couple dozen statement forms to prepare. Setting the default statistics
> target to 5 is just a way to make some other query perform badly.

Agreed, but as you can set a different statistics target *per* column, it should not be a problem. What is sure is that all time spent on parsing and planning is removed, and what remains should be pure execution time.

Rgds,
-Dimitri

>
>
> On 5/12/09 10:53 AM, "Alvaro Herrera" <alvherre@xxxxxxxxxxxxxxxxx> wrote:
>
>> Andres Freund wrote:
>>
>>> Naturally it would still be nice to be good in this not optimal
>>> workload...
>>
>> I find it hard to justify wasting our scarce development resources into
>> optimizing such a contrived workload.
>>
>> --
>> Alvaro Herrera
>> http://www.CommandPrompt.com/
>> The PostgreSQL Company - Command Prompt, Inc.
>>
>> --
>> Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-performance
>
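P.S. Here is a minimal sketch of what I mean by the prepared-statement test. The table and column names ("history", "hid") are only hypothetical placeholders, not the actual benchmark schema:

  -- Raise the statistics target only for the column that needs it,
  -- instead of touching the global default_statistics_target.
  ALTER TABLE history ALTER COLUMN hid SET STATISTICS 100;
  ANALYZE history;

  -- Parse and plan the query once...
  PREPARE get_hist (int) AS
    SELECT * FROM history WHERE hid = $1;

  -- ...then each call from the client is pure execution time.
  EXECUTE get_hist(42);

Each client connection would prepare its couple of statement forms once at the start of the test and then only issue EXECUTE calls for the rest of the run.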
Attachment:
Hist_coresALL_RW0.ccrnone.pgsql-8.4beta1-buf4096-tps_avg-1.gif
Description: GIF image