Re: Really really slow select count(*)

> I checked the site, everything was normal.  I went in via psql and tried some
> queries for about half an hour and continued to monitor the site.  Then I went
> to bed at 7am (EU time).
>
> Why did it shut down so much later?

 

That's one of the things I talked about. To be safe, PG's default "smart" shutdown only stops accepting new connections, and *that's all*. Existing connections are grandfathered in until they disconnect, and only once they have all gone away does it shut down gracefully.

 

pg_ctl -D /my/pg/dir stop -m fast
pg_ctl -D /my/pg/dir start

 

Is what you wanted.
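
For reference, the three shutdown modes behave roughly like this (the data directory is a placeholder, as above):

pg_ctl -D /my/pg/dir stop -m smart      # default: refuse new connections, wait for existing ones to end
pg_ctl -D /my/pg/dir stop -m fast       # disconnect clients, roll back open transactions, shut down cleanly
pg_ctl -D /my/pg/dir stop -m immediate  # abort outright; crash recovery runs at the next startup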

 

> I have never restarted postgres before, so this was all new to me.  I apologize
> that I wasn't born innately with such knowledge.

 

Forget about it. But you need to learn your tools. Restarting the DB server is something you'll need to do occasionally, just like restarting your proxy or your Django app. You need to be fully knowledgeable about every part of your toolchain, or at least the parts you're responsible for.

 

> I double checked opinions against different people and I searched for more docs
> on that param.  Amazingly none of the ones I found commented on the shared
> memory issue and I didn't even understand the docs discussing shared memory
> because it didn't seem to apply to what I was doing.

 

That's no coincidence. I've seen that complaint when increasing shared_buffers, but not for max_fsm_pages. I guess I'm so used to bumping up shmmax and shmall that I forget how low default systems leave those kernel values. But you do need to increase them, every time; the defaults cripple your install in more ways than just postgres.
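
On Linux that usually means something like this (the numbers here are only illustrative; size them to your RAM and your shared_buffers setting):

# /etc/sysctl.conf -- raise the SysV shared memory limits
kernel.shmmax = 4294967296    # largest single shared memory segment, in bytes (4 GB here)
kernel.shmall = 1048576       # total shared memory allowed, in 4 kB pages (4 GB here)

# apply the new limits without a reboot
sysctl -p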

 

So far as your Django install goes, have you activated the memcache contrib module? Your pages should be lazy-caching and should rarely touch the DB if they can avoid it. You should also rarely be doing count(*) on a 300k-row table, even when everything is cached and speedy. 300k-row tables have a nasty habit of becoming 3M-row tables (or more) given enough time, and no amount of caching will save you from counting that: eventually it'll take a second or more every single time, and then you'll be in real trouble. That's an application design issue to address before it's too late and you're forced into a hasty fix.
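
If a page genuinely needs a count, there are cheaper options than count(*). A rough sketch of the usual trigger-counter pattern, assuming a hypothetical table called my_big_table (not something from this thread):

-- close enough for a dashboard: the planner's row estimate, essentially free
SELECT reltuples::bigint FROM pg_class WHERE relname = 'my_big_table';

-- exact count: keep a side table maintained by triggers
CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);
INSERT INTO row_counts SELECT 'my_big_table', count(*) FROM my_big_table;

CREATE FUNCTION maintain_row_count() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE row_counts SET n = n + 1 WHERE table_name = TG_TABLE_NAME;
  ELSE  -- DELETE
    UPDATE row_counts SET n = n - 1 WHERE table_name = TG_TABLE_NAME;
  END IF;
  RETURN NULL;  -- AFTER row trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_big_table_count AFTER INSERT OR DELETE ON my_big_table
FOR EACH ROW EXECUTE PROCEDURE maintain_row_count();

The single counter row serializes concurrent writers, so this suits read-heavy tables; for write-heavy ones you'd want to batch the updates.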

 

I suggest setting log_min_duration_statement to 1000, so every query that takes longer than one second to execute is written to your postgres logs. You can use that to track down trouble spots before they get really bad. A one-second threshold is normally aggressive enough to catch the real problem queries without flooding your logs with too much output.
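
Concretely, using the same placeholder data directory as before:

# postgresql.conf
log_min_duration_statement = 1000    # log every statement that runs for 1000 ms or more

# pick up the change without a restart
pg_ctl -D /my/pg/dir reload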

 

Being a DBA sucks sometimes. :)

 


