Re: too many locks held

On Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya <jeisonb@xxxxxxxxxxxxxxxx> wrote:
Hi everybody,

In recent days, we have seen many processes reaching 5000 locks held.


Do you know what queries are holding locks? Is that behaviour expected?
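To answer that, a quick way to count granted locks per backend and see the query text is to join pg_locks against pg_stat_activity. This is a sketch for a live 9.2 server (the `pid` and `query` columns exist from 9.2 onward):

```sql
-- Which backends hold the most locks, and what are they running?
SELECT a.pid,
       a.query,
       count(*) AS locks_held
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.granted
GROUP BY a.pid, a.query
ORDER BY locks_held DESC
LIMIT 10;
```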

 
When that happens, the machine becomes sluggish and the database stops responding. I have tried changing configuration parameters but have not found anything satisfactory. I am also seeing log messages like the following:
COT idle ERROR:  out of memory
COT idle DETAIL:  Cannot enlarge string buffer containing 0 bytes by 1476395004 more bytes.

I've never come across that message before, so someone wiser will need to comment on that.
 
COT idle LOG:  incomplete message from client
COT UPDATE waiting LOG:  process 20761 still waiting for ShareLock on transaction 10580510 after 1664674 ms
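When a process is stuck waiting on a transaction lock like that, you can find who holds the conflicting lock. 9.2 predates pg_blocking_pids(), so a self-join on pg_locks is the usual approach; a sketch:

```sql
-- Pair each waiting transaction-id lock with the granted lock it conflicts with.
SELECT w.pid   AS waiting_pid,
       h.pid   AS holding_pid,
       h.mode  AS held_mode,
       a.query AS holding_query
FROM pg_locks w
JOIN pg_locks h
  ON  h.granted
  AND NOT w.granted
  AND w.locktype = 'transactionid'
  AND h.locktype = 'transactionid'
  AND w.transactionid = h.transactionid
JOIN pg_stat_activity a ON a.pid = h.pid;
```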

My machine runs PostgreSQL 9.2.2 on Linux, with the following settings:

You will want to upgrade to the latest point release (9.2.4), as a serious security vulnerability was fixed in 9.2.3. Details: http://www.postgresql.org/about/news/1446/
 

RAM: 128 GB
cores: 32

max_connections: 900

I would say you might be better off using a connection pooler if you need this many connections.
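A common choice is PgBouncer in transaction-pooling mode, so the same 900 clients share far fewer server backends. A minimal pgbouncer.ini sketch; the database name, paths, and pool sizes are illustrative, not tuned for this workload:

```ini
[databases]
; route the application's database through the pooler (name is illustrative)
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; share server connections between transactions
pool_mode = transaction
; still accept up to 900 clients...
max_client_conn = 900
; ...but keep only this many server backends per database/user pair
default_pool_size = 50
```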


work_mem = 1024MB

work_mem is pretty high. It would make sense in a data-warehouse environment, but with a max of 900 connections it can get used up in a hurry. Do you find your queries regularly spilling sorts to disk (shown as "Sort Method: external merge  Disk: ..." in your EXPLAIN ANALYZE output)?
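Two ways to check for spills: turn on log_temp_files so every temporary file a query creates gets logged, or run EXPLAIN ANALYZE on a suspect query and read the sort method. A sketch (the table and column names are placeholders):

```sql
-- 0 means "log every temp file"; positive values are a size threshold in kB.
-- Settable per session by a superuser, or in postgresql.conf.
SET log_temp_files = 0;

-- Look for "Sort Method: external merge  Disk: ...kB" in the output.
EXPLAIN ANALYZE
SELECT * FROM some_table ORDER BY some_column;
```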

Have you looked at swapping and disk I/O during these periods of sluggishness?
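If not, a few standard tools will show whether the box is swapping or I/O-bound during an episode (iostat comes from the sysstat package; the sampling intervals are arbitrary):

```shell
# Memory, swap-in/swap-out (si/so columns), and CPU, sampled once per second.
vmstat 1 5

# Per-device I/O utilization and await times.
iostat -x 1 5

# Quick snapshot of current swap usage.
free -m
```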


