Re: Profiling PostgreSQL

Thanks for your answers. A script around pstack worked for me.
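In case it's useful to anyone searching the archives, the idea was roughly the following (a minimal Python sketch, not the exact script; it assumes pstack is installed, the backend PIDs are passed on the command line, and the sample count and interval are arbitrary):

#!/usr/bin/env python
# Sketch: sample backend stacks with pstack and count the most common
# innermost frames. Backend PIDs are passed as command-line arguments.
import collections
import subprocess
import sys
import time

SAMPLES = 50     # number of sampling rounds (arbitrary)
INTERVAL = 0.1   # seconds between rounds (arbitrary)

counts = collections.Counter()

for _ in range(SAMPLES):
    for pid in sys.argv[1:]:
        try:
            out = subprocess.check_output(["pstack", pid])
        except subprocess.CalledProcessError:
            continue  # the backend may have exited between samples
        for line in out.decode("ascii", "replace").splitlines():
            # pstack frames look like "#0  0x... in func () from lib"
            parts = line.split()
            if parts and parts[0] == "#0":
                if len(parts) >= 4 and parts[2] == "in":
                    counts[parts[3]] += 1
                elif len(parts) >= 2:
                    counts[parts[1]] += 1
    time.sleep(INTERVAL)

for func, n in counts.most_common(20):
    print("%6d  %s" % (n, func))

Aggregating only the innermost (#0) frame is crude, but it is usually enough to spot backends piling up in the same lock-acquisition function.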

(I'm not sure whether I should open a new thread; I hope it's OK to ask another question here.)

For the workload I run, PostgreSQL seems to scale with the number of concurrent clients until the client count roughly matches the number of cores.
Increasing the number of clients beyond that point leads to a dramatic performance degradation. pstack and perf show that backends block in LWLockAcquire calls, so one could assume the system slows down because multiple concurrent transactions access the same data.
However, I ran the following two experiments (a sketch of my client harness appears after the list):
1) I completely removed the UPDATE transactions from my workload. Throughput improved, yet the trend was the same: increasing the number of clients still hurt performance badly.
2) I deployed PostgreSQL on more cores. Throughput improved considerably. If the problem were due to concurrency control, throughput should have stayed the same regardless of the number of hardware contexts.
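For reference, the shape of my client harness is roughly this (a minimal Python sketch, not the exact code; psycopg2, the DSN, the query, and the measurement window are placeholders):

#!/usr/bin/env python
# Sketch: run the same query from an increasing number of concurrent
# clients and print the throughput for each client count.
import threading
import time

import psycopg2  # assumed driver; any DB-API module would do

DSN = "dbname=test"   # placeholder connection string
QUERY = "SELECT 1"    # placeholder for the real workload query
DURATION = 10         # seconds measured per client count (arbitrary)

def worker(stop, results):
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    n = 0
    while not stop.is_set():
        cur.execute(QUERY)
        cur.fetchall()
        n += 1
    results.append(n)  # list.append is atomic under the GIL
    conn.close()

for clients in (1, 2, 4, 8, 16, 32, 64):
    stop = threading.Event()
    results = []
    threads = [threading.Thread(target=worker, args=(stop, results))
               for _ in range(clients)]
    for t in threads:
        t.start()
    time.sleep(DURATION)
    stop.set()
    for t in threads:
        t.join()
    print("%3d clients: %9.1f qps" % (clients, sum(results) / float(DURATION)))

(One caveat: a single threaded Python driver can itself become the bottleneck at high client counts, so the clients may need to be spread over several driver processes.)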

Any insight into why the system behaves this way?

Cheers,
Dimitris


On Fri, May 23, 2014 at 1:39 AM, Michael Paquier <michael.paquier@xxxxxxxxx> wrote:
On Thu, May 22, 2014 at 10:48 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> Call graph data usually isn't trustworthy unless you built the program
> with -fno-omit-frame-pointer ...
This page is full of ideas as well:
https://wiki.postgresql.org/wiki/Profiling_with_perf
--
Michael

