On Mon, Feb 7, 2011 at 8:17 PM, felix <crucialfelix@xxxxxxxxx> wrote:
>
> On Mon, Feb 7, 2011 at 6:05 AM, Shaun Thomas <sthomas@xxxxxxxxx> wrote:
>>
>> That’s one of the things I talked about. To be safe, PG will start to
>> shut down but disallow new connections, and *that’s all*. Old connections
>> are grandfathered in until they disconnect, and when they all go away,
>> it shuts down gracefully.
>
> Well.... it said "Failed to shutdown ..............." and then returned
> control.
> and then proceeded to run for about an hour.
> I'm not sure how graceful that is.
> I generally take programs at their word. "Failed" is clearly past tense.

I agree that here what pg_ctl said and what it actually did aren't
exactly the same thing.

> but it is one particular table on postgres that has shit the sock drawer.

What queries are running slow, and what does explain analyze have to
say about them?

>> You should also rarely be doing count(*) on a 300k row table, even if
>> everything is cached and speedy.
>
> I'm not
> this is a test query that is obviously way out of bounds for acceptable
> response.
> there is something very very wrong with this table and I need to solve it
> ASAP.
> other tables that have less updates but similar sizes are not having this
> problem.

Is this the same problem you had at the beginning and were trying to
fix with clustering and increasing fsm, or is this now a different
table and a different problem?

> there are foreign keys pointing to this table so its a bit tricky to just
> refill it, but I can think of one way. I'll have to do that.
> its only conjecture that the issue is file space bloat or free map problems.
> those are overall issues that I will get to as soon as I can. but this is
> table specific.

What does the bloat-checking query you ran before show for this table
now? (A couple of quick checks are sketched at the end of this message.)

>> That’s an application design issue you need to address before it’s too
>> late, or you have to rush and implement a hasty fix.
>
> it is not an application design issue, though there are always improvements
> being made.

If your application is doing select count(*) with either no where
clause or with a very non-selective one, then it is somewhat of a
design issue, and there are ways to make that faster (one option is
sketched below). If it's a different query, show us what it and its
explain analyze look like.

>> Being a DBA sucks sometimes. :-)
>
> I am not a DBA, I'm just trying to query a 300k row table.
> though I am happy to learn more. I know an awful lot about a lot of things.
> but you can't specialize in everything

Well, the good news is that there's a LOT less arcana involved in
keeping pgsql happy than there is in keeping something like Oracle
happy.
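
In case it helps, here is roughly what I'd run to see whether that one
table is bloated and what the slow count is actually doing. This is a
minimal sketch only; 'mytable' is a placeholder for your real table name:

  -- Dead vs. live tuple counts from the stats collector; a dead-tuple
  -- count on the same order as n_live_tup points at bloat.
  SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
   WHERE relname = 'mytable';

  -- On-disk size of the table's heap; compare it to what ~300k rows
  -- ought to take up.
  SELECT pg_size_pretty(pg_relation_size('mytable'));

  -- And the plan plus real timings for the slow count itself:
  EXPLAIN ANALYZE SELECT count(*) FROM mytable;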
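
And if an approximate number is good enough for whatever is issuing
that count, one common workaround (again just a sketch, table name
hypothetical) is to read the planner's estimate instead of scanning
the whole heap:

  -- reltuples is the planner's row estimate, refreshed by ANALYZE /
  -- autovacuum; it's approximate but comes back instantly.
  SELECT reltuples::bigint AS approx_rows
    FROM pg_class
   WHERE relname = 'mytable';

An exact count with a selective, indexed where clause also stays cheap,
since it only has to visit the matching rows.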