Well, this ANALYZE just took 12 minutes... stats target of 100.

# time psql xxx xxx -c "analyze elem_trafficstats_1"
ANALYZE

real    12m1.070s
user    0m0.001s
sys     0m0.015s

A large table, but by far not our largest... We have about a dozen
tables like this, so analyzing them all takes 3-4 hours. No weird
datatypes: just bigints for the facts, timestamptz and ints for the
dimensions.

My problem is not the ANALYZE itself; it's that our db is really busy
doing other work, so the ANALYZE I/O is competing with it... I am
random-I/O bound like crazy.

If I set the stats target to 10, I get:

# time psql xxxx xxx -c "set session default_statistics_target to 10;analyze elem_trafficstats_1"
ANALYZE

real    2m15.733s
user    0m0.009s
sys     0m2.255s

Better, but I'm not sure what side effects this would have.

> -----Original Message-----
> From: Tom Lane [mailto:tgl@xxxxxxxxxxxxx]
> Sent: Friday, March 10, 2006 1:31 PM
> To: Marc Morin
> Cc: pgsql-performance@xxxxxxxxxxxxxx
> Subject: Re: [PERFORM] Trouble managing planner for
> timestamptz columns
>
> "Marc Morin" <marc@xxxxxxxxxxxx> writes:
> > We tend to analyze these tables every day or so and this doesn't
> > always prove to be sufficient....
>
> Seems to me you just stated your problem.  Instead of having
> the planner make wild extrapolations, why not set up a cron
> job to analyze these tables more often?  Or use autovacuum
> which will do it for you.
>
> > Since the table is so large and the system is busy (disk
> > not idle at all), doing an analyze on this table in the
> > production system can take 1/2 hour!  (statistics collector
> > set to 100).
>
> I'd believe that for vacuum analyze, but analyze alone should
> be cheap.  Have you perhaps got some weird datatypes in the table?
> Maybe you should back off the stats target a bit?
>
> We do support analyzing selected columns, so you might try
> something like a cron job analyzing only the timestamp
> column, with a suitably low stats target for that column.
> This would yield numbers far more reliable than any
> extrapolation the planner could do.
>
> 			regards, tom lane
>
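
Concretely, Tom's per-column suggestion would look something like the
sketch below. The column name "ts" and database name "mydb" are
placeholders for the real schema, and the target and schedule are just
starting points to tune:

  -- Lower the stats target on the timestamp column only; the other
  -- columns keep the database-wide default, so their plans are
  -- unaffected.
  ALTER TABLE elem_trafficstats_1 ALTER COLUMN ts SET STATISTICS 10;

  -- Analyze just that one column; with a target of 10 the sample is
  -- a fraction of a full-table ANALYZE at target 100, so it should
  -- be much lighter on random I/O.
  ANALYZE elem_trafficstats_1 (ts);

Wrapped in a cron entry, the column-only ANALYZE could then run far
more often than the daily full-table pass:

  # re-analyze only the timestamp column hourly (placeholder schedule)
  0 * * * *  psql mydb -c "ANALYZE elem_trafficstats_1 (ts)"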