Thanks for the reply.

The database is vacuum analysed regularly, and during my testing I tried running a VACUUM FULL ANALYZE immediately before running through the set of queries (which does help a bit; it cuts the time to about 80% of what it was, but that is still over an hour, with basically 100% CPU). I'll get back to you with the full EXPLAIN ANALYZE output (I need to re-create my test database back to its original state, and that takes a while), but I assume the part you're after is that all the queries are sequential scans, which I initially thought was the problem. However, it is my understanding that I cannot make them index scans, because a large percentage of the table is being returned by each query (typically 30%), so the planner will favour a sequential scan over an index scan in that case, correct? If the queries had been disk bound (due to retrieving large amounts of data) I would have understood, but I am confused as to why a sequential scan would cause such high CPU and not high disk activity. To double-check the planner's choice, I'll also time the queries with sequential scans discouraged, along the lines of the first sketch below.

Yes, I wish I could upgrade to the latest version of PostgreSQL, but at the moment my hands are tied due to dependencies on other applications running on our server (obviously we need to update certain queries, e.g. rewriting the implicit-join DELETEs as DELETE ... USING ..., roughly as in the second sketch below, and test with 8.1 first). I will be pushing for an upgrade as soon as possible. And fsync = false is a "compromise" to try to improve performance (moving to 8.1 would be a better compromise).
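(For reference, here is the comparison I have in mind; a minimal sketch only, using a simplified form of the real query from below, so treat the exact query text as a placeholder:)

-- Discourage sequential scans for this session, then compare the plan and timing:
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT epId, SUM(cnt)
FROM ftone
WHERE consolidation = 0
GROUP BY epId;
-- Restore the default afterwards:
SET enable_seqscan = on;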
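(And this is the kind of DELETE rewrite I mean for the 8.1 testing; again just a sketch, and I'm assuming the intended join column is ftone.epId:)

-- 7.3 style: the second table is pulled in by an implicit join in the WHERE clause
DELETE FROM ONLY ftone WHERE ftone.epId = fttemp1600384653.epId;

-- 8.1 style: additional tables must be listed explicitly with USING
DELETE FROM ONLY ftone USING fttemp1600384653
WHERE ftone.epId = fttemp1600384653.epId;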
Neil

On 10/07/06, Jeff Frost <jeff@xxxxxxxxxxxxxxxxxxxxxx> wrote:

On Mon, 10 Jul 2006, Neil Hepworth wrote:

> I am running PostgreSQL 7.3 on a Linux box (RHEL 2.1 - Xeon 2.8GHz
> with 1GB of RAM) and seeing very high CPU usage (normally over 90%)
> when I am running the following queries, and the queries take a long
> time to return; over an hour!

First off, when was the last time you vacuum analyzed this DB, and how
often does the vacuum analyze happen? Please post the EXPLAIN ANALYZE
output for each of the queries below.

Also, I would strongly urge you to upgrade to a more recent version of
postgresql. We're currently up to 8.1.4, and it has tons of excellent
performance enhancements as well as helpful features such as integrated
autovacuum, point-in-time recovery backups, etc.

Also, I see that you're running with fsync = false. That's quite
dangerous, especially on a production system.

> CREATE TEMPORARY TABLE fttemp1600384653 AS SELECT * FROM ftoneway LIMIT 0;
>
> INSERT INTO fttemp1600384653 SELECT epId, TO_TIMESTAMP(start,
> 'YYYY-MM-DD HH24:00:00.0')::timestamp AS start, 60 AS consolidation,
> SUM(cnt) AS cnt FROM ftone WHERE consolidation = 0 AND start <
> TO_TIMESTAMP('2006-06-27 18:43:27.391103+1000', 'YYYY-MM-DD
> HH24:00:00.0')::timestamp;
>
> DELETE FROM ONLY ftone WHERE ftoneway.epId= fttemp1600384653.epId;
>
> The only changes I've made to the default postgresql.conf file are listed
> below:
>
> LC_MESSAGES = 'en_US'
> LC_MONETARY = 'en_US'
> LC_NUMERIC = 'en_US'
> LC_TIME = 'en_US'
> tcpip_socket = true
> max_connections = 20
> effective_cache_size = 32768
> wal_buffers = 128
> fsync = false
> shared_buffers = 3000
> max_fsm_relations = 10000
> max_fsm_pages = 100000
>
> The tables are around a million rows, but when I run against
> tables of a few hundred thousand rows it still takes tens of minutes
> with high CPU. My database does have a lot of tables (can be several
> thousand); can that cause performance issues?
>
> Thanks,
> Neil

--
Jeff Frost, Owner       <jeff@xxxxxxxxxxxxxxxxxxxxxx>
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908     FAX: 650-649-1954