Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?

On 09 Jul 2015, at 15:22, Thomas Kellerer <spam_eater@xxxxxxx> wrote:

> Graeme B. Bell schrieb am 09.07.2015 um 11:44:
>> I don't recall seeing a clear statement telling me I should mark pl/pgsql
>> functions nonvolatile wherever possible or throw all performance and
>> scalability out the window. 
> 
> From: http://www.postgresql.org/docs/current/static/xfunc-volatility.html
> 
>   "For best optimization results, you should label your functions 
>    with the strictest volatility category that is valid for them."


Hi Thomas,

Thank you very much for the link.

However, the point I was making wasn't that no such sentence exists anywhere in the docs. My point was that I've read the docs more than anyone else in my institute, and I was still completely unaware of this.

It's also quite vague - if you hand that sentence to a younger programmer in particular, how do they implement it in practice? When is it important to do it? If this one factor silently breaks multiprocessor scaling of pl/pgsql, and multiprocessing is the biggest trend in CPU design of the last decade (compare server CPUs of 2005 with 2015), then why is this information not up front and clear?
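For anyone who finds this thread later, the practical version of that sentence is a one-word label on the function definition. A minimal sketch (the function itself is just a toy example):

  -- With no label, a pl/pgsql function defaults to VOLATILE: the planner
  -- must assume side effects and re-evaluate it on every call.
  CREATE OR REPLACE FUNCTION add_one(n integer) RETURNS integer AS $$
  BEGIN
    RETURN n + 1;
  END;
  $$ LANGUAGE plpgsql;              -- implicitly VOLATILE

  -- The same function, labelled with the strictest category valid for it.
  -- IMMUTABLE: same inputs always give the same output, no database access.
  CREATE OR REPLACE FUNCTION add_one(n integer) RETURNS integer AS $$
  BEGIN
    RETURN n + 1;
  END;
  $$ LANGUAGE plpgsql IMMUTABLE;

  -- STABLE sits in between: the function may read the database but not
  -- modify it, and returns consistent results within a single statement.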


A second point to keep in mind is that optimisation and parallelisation/scalability are not always the same thing.

For example, in one project I took a bunch of looped parallel UPDATEs on a set of 50 tables, and rewrote them to run the whole loop inside a pl/pgsql function. Crudely put, I took out the table-level for loop and moved it down to row level instead.

I expected they'd execute much faster if the UPDATEs were working on data still in cache. I would also be updating without repeatedly writing out WAL entries to disk.
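Roughly, the restructuring looked like this (table, column and function names are invented here - the real code was more involved):

  -- Before: N whole-table passes per table, with the 50 tables processed
  -- in parallel from separate connections.
  --   for i in 1..N:  UPDATE some_table SET state = next_state(state);

  -- After: one pass over the rows, iterating N times per row inside
  -- pl/pgsql, so each row stays hot in cache.
  CREATE OR REPLACE FUNCTION iterate_states(n integer) RETURNS void AS $$
  DECLARE
    r record;
    s integer;
  BEGIN
    FOR r IN SELECT id, state FROM some_table LOOP
      s := r.state;
      FOR i IN 1 .. n LOOP
        s := next_state(s);          -- next_state() is a stand-in here
      END LOOP;
      UPDATE some_table SET state = s WHERE id = r.id;
    END LOOP;
  END;
  $$ LANGUAGE plpgsql;               -- note: unlabelled, so VOLATILE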

It turned out the per-row update ran much faster when I used one table - as expected - but when I ran it in parallel on many tables, performance was even worse than when I started. If you look at the benchmarks, you'll see performance drop through the floor at 8-16 cores. I think that was when I first noticed this bug/feature.

[If anyone is curious, the way I solved that one in the end was to pre-calculate, in Python, every possible way the tables might be updated after N loops of updates, and to import that as a lookup table into PG. It turned out that although we had tens of GB of data per table, there were only about 100,000 distinct situations (and only around 80 iterations to consider). Then I ran a single set of UPDATEs with no pl/pgsql at all. It was something like a 10000x performance improvement.]
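[The final version was shaped something like this - again, the names are invented:

  -- Lookup table mapping each distinct starting state to its state after
  -- N iterations, pre-computed in Python and imported (e.g. with COPY).
  CREATE TABLE state_after_n (
    start_state integer PRIMARY KEY,
    end_state   integer NOT NULL
  );

  -- One set-based UPDATE per table replaces the whole iterative loop.
  UPDATE some_table t
  SET    state = l.end_state
  FROM   state_after_n l
  WHERE  l.start_state = t.state;
]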

Graeme.
