Subbiah, Stalin wrote:
Hello All,
This query runs forever. Given the nature of this table (lots of inserts,
deletes, and queries), I vacuum it every half hour to keep the dead space
reusable, and run VACUUM ANALYZE nightly to update the optimizer statistics.
We have an index on eventtime only. Running the query for the current day
uses an index range scan and completes within acceptable time. Below is the
EXPLAIN output of the query. Does the ORDER BY sequencenum DESC prevent the
LIMIT optimization from being applied?
explain SELECT *
FROM EVENTLOG
WHERE EVENTTIME>'07/23/06 16:00:00'
AND EVENTTIME<'08/22/06 16:00:00'
AND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA'
OR OBJID='tzRh39d0d91luNGT1weIUjLvFIcA'
OR USERDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA')
ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 0;
QUERY PLAN
--------------------------------------------------------------------------------
 Limit  (cost=15546930.29..15546931.54 rows=500 width=327)
   ->  Sort  (cost=15546930.29..15581924.84 rows=13997819 width=327)
         Sort Key: eventtime, sequencenum
         ->  Seq Scan on eventlog  (cost=0.00..2332700.25 rows=13997819 width=327)
               Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp without time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp without time zone) AND (((objdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))
(5 rows)
Thanks,
Stalin
PostgreSQL version 8.0.1, SuSE, 64-bit.
Firstly, you should update to 8.0.8 - because it's in the same release
series you won't need to do a dump/initdb/reload as you would for a major
version change; it should be a simple upgrade.
Can you send explain analyze instead of just explain?
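For example (note that EXPLAIN ANALYZE actually executes the query, so it
will take as long as the query itself does):

    EXPLAIN ANALYZE
    SELECT *
    FROM eventlog
    WHERE eventtime > '07/23/06 16:00:00'
      AND eventtime < '08/22/06 16:00:00'
      AND (objdomainid = 'tzRh39d0d91luNGT1weIUjLvFIcA'
           OR objid = 'tzRh39d0d91luNGT1weIUjLvFIcA'
           OR userdomainid = 'tzRh39d0d91luNGT1weIUjLvFIcA')
    ORDER BY eventtime DESC, sequencenum DESC
    LIMIT 500 OFFSET 0;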
It sounds like you're not analyzing often enough - if you're doing lots of
updates/deletes/inserts, the statistics PostgreSQL uses to decide between an
index scan and something else quickly become outdated, and it falls back to
a full table scan every time.
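Refreshing just the statistics by hand is much cheaper than a full VACUUM,
so it can be run more often. The SET STATISTICS step is optional, and the
target of 100 is only an illustrative choice (the 8.0 default is 10):

    -- refresh the planner statistics for the table without vacuuming it
    ANALYZE eventlog;

    -- optionally gather a larger statistics sample on the indexed column
    ALTER TABLE eventlog ALTER COLUMN eventtime SET STATISTICS 100;
    ANALYZE eventlog;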
Can you set up autovacuum to handle that for you more regularly?
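On 8.0 autovacuum isn't built into the server; it ships as the contrib
pg_autovacuum daemon and needs the row-level statistics collector enabled.
A rough sketch of starting it - the threshold values are illustrative only
and the flag names are from memory, so verify them with pg_autovacuum --help
for your build:

    # postgresql.conf prerequisites for pg_autovacuum:
    #   stats_start_collector = true
    #   stats_row_level = true
    #
    # contrib pg_autovacuum daemon (PostgreSQL 8.0):
    #   -D     run as a daemon
    #   -s     base sleep time between loops, in seconds
    #   -v/-V  vacuum base threshold / scaling factor
    #   -a/-A  analyze base threshold / scaling factor
    pg_autovacuum -D -U postgres -s 300 -v 1000 -V 0.5 -a 500 -A 0.25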
--
Postgresql & php tutorials
http://www.designmagick.com/