Hi Merlin,

Long story short - thanks for the reply, but you're not measuring anything about the parallelism of code running in a pl/pgsql environment here. You're just measuring whether postgres can parallelise entering that environment and getting back out.

Don't get me wrong - it's great that this scales well, because it affects situations where you have lots of calls to trivial functions. However, it's not the problem I'm talking about. I mean 'real' pl/pgsql functions, e.g. the things you might find in PostGIS or similar. If you re-read my previous email, or look at par_psql (http://parpsql.com) and the benchmarks there, you'll see more of what I'm talking about.

To clear up the issue, I built a little test harness around your comment below. If anyone was wondering whether it's par_psql itself that causes the bad scaling in postgres: the answer is clearly no. :-)

What I found this evening is that there are several problems here. I did some testing using a machine with 16 physical cores and lots of memory/IO.

- Using a table as a source of input rather than a fixed parameter, e.g. 'select col1 ...' vs. 'select 3', scales badly. Please note I am not talking about poor performance; I am talking about poor scaling of performance across cores. There should be no reason for this when only read locks are being taken on the table, and no reason for it when combined with, e.g., a bunch of pl/pgsql work in a function. However, the impact of this problem is only seen above 8 cores, where performance crashes.

- Using pl/pgsql itself intensively (i.e. anything non-trivial) causes horrifically bad scaling above 2 cores on the systems I've tested, and performance crashes very hard soon after. This matches what I've seen elsewhere in big projects and in par_psql's tests. (See the sketch below for the kind of test I mean.)
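To make the two cases concrete, here's a rough sketch of the kind of test I mean. The function, table and file names are just illustrative placeholders, not the actual harness code (that's what's going into the repo below):

  -- a non-trivial, CPU-bound pl/pgsql function: pure integer work, no I/O
  create or replace function burn_cpu(n int) returns int as $$
  declare
    acc int := 0;
  begin
    for i in 1 .. n loop
      acc := (acc * 31 + i) % 1000003;  -- stays well inside int range
    end loop;
    return acc;
  end;
  $$ language plpgsql;

  -- a one-row table supplying the same value as the constant below
  create unlogged table vals as select 100000::int as col1;

fixed.sql (fixed parameter, the 'select 3' case):
  select burn_cpu(100000);

tbl.sql (input read from a table, the 'select col1 ...' case):
  select burn_cpu(col1) from vals;

Run each file with increasing client counts and compare how the tps scales:

  pgbench -n -T 30 -c 1 -j 1 -f fixed.sql
  pgbench -n -T 30 -c 8 -j 8 -f fixed.sql
  pgbench -n -T 30 -c 16 -j 16 -f fixed.sql

On a perfectly scaling box the 16-client tps would be close to 16x the single-client tps; what I'm describing above is that it flattens out around 2-2.5x, and the table-input variant collapses above 8 cores.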
Of course, it could be some wacky postgresql.conf setting (I doubt it), so I'd be glad if others could give it a try. If you're bored, set the time to 5s and run; from my testing I can tell you it shouldn't alter the results.

The repo will be up in around 30 minutes' time at http://github.com/gbb/ppppt, and I'm going to submit it as a bug to the pg bugs list.

Graeme.

On 06 Jul 2015, at 18:40, Merlin Moncure <mmoncure@xxxxxxxxx> wrote:

> On Fri, Jul 3, 2015 at 9:48 AM, Graeme B. Bell <graeme.bell@xxxxxxxx> wrote:
>> Hi everyone,
>>
>> I've written a new open source tool for easily parallelising SQL scripts in postgres. [obligatory plug: https://github.com/gbb/par_psql ]
>>
>> Using it, I'm seeing a problem that I've also seen in other postgres projects involving high degrees of parallelisation in the last 12 months.
>>
>> Basically:
>>
>> - I have machines here with up to 16 CPU cores and 128GB memory, very fast SSDs and controller etc., and a carefully configured kernel/postgresql.conf for high performance.
>>
>> - Ordinary queries parallelise nearly perfectly (e.g. SELECT some_stuff ...), almost up to a 16x performance improvement.
>>
>> - Non-DB stuff like GDAL, python etc. parallelises nearly perfectly.
>>
>> - HOWEVER, calls to CPU-intensive user-defined pl/pgsql functions (e.g. SELECT myfunction(some_stuff)) do not parallelise well, even when they are independently defined functions or access tables in a read-only way. They hit a limit of 2.5x performance improvement relative to single-CPU performance (pg9.4), and merely 2x (pg9.3), regardless of how many CPU cores I throw at them. This is about 6 times slower than I'm expecting.
>>
>> I can't see what would be locking. It seems like it's the pl/pgsql environment itself that is somehow locking or incurring some huge frictional costs. Whether I use independently defined functions, independent source tables, or independent output tables makes no difference whatsoever, so it doesn't feel 'lock-related'. It also doesn't seem to be WAL/synchronisation related, as the machines I'm using can hit absurdly high pgbench rates, and I'm using unlogged tables for output.
>>
>> Take a quick peek here: https://github.com/gbb/par_psql/blob/master/BENCHMARKS.md
>>
>> I'm wondering what I'm missing here. Any ideas?
>
> I'm not necessarily seeing your results. via pgbench,
>
> mmoncure@mernix2 11:34 AM ~$ ~/pgdev/bin/pgbench -n -T 60 -f b.sql
> transaction type: Custom query
> scaling factor: 1
> query mode: simple
> number of clients: 1
> number of threads: 1
> duration: 60 s
> number of transactions actually processed: 658833
> latency average: 0.091 ms
> tps = 10980.538470 (including connections establishing)
> tps = 10980.994547 (excluding connections establishing)
>
> mmoncure@mernix2 11:35 AM ~$ ~/pgdev/bin/pgbench -n -T 60 -c4 -j4 -f b.sql
> transaction type: Custom query
> scaling factor: 1
> query mode: simple
> number of clients: 4
> number of threads: 4
> duration: 60 s
> number of transactions actually processed: 2847631
> latency average: 0.084 ms
> tps = 47460.430447 (including connections establishing)
> tps = 47463.702074 (excluding connections establishing)
>
> b.sql:
> select f();
>
> f():
> create or replace function f() returns int as $$ begin return 1; end;
> $$ language plpgsql;
>
> the results are pretty volatile even with a 60s run, but I'm clearly
> not capped at 2.5x parallelization (my box is 4 core). It would help
> if you disclosed the function body you're benchmarking. If the
> problem is indeed on the server, the next step I think is to profile
> the code and look for locking issues.
>
> merlin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance