"Warren Bell" <warren@xxxxxxxxxxxxxxxxxxx> writes: > I have a table with 36 fields that slows down quite a bit after some light > use. There are only 5 clients connected to this DB and they are doing mostly > inserts and updates. There is no load on this server or db at all. This > table has had no more than 10,000 records and is being accesessd at the rate > of once per 5 seconds. It will slow down quite a bit. It will take 10 > seconds to do a `SELECT * FROM` query. I delete all records except one > perform a VACUUM and this will not speed it up. I drop the table and > recreate it and insert one record and it speeds right back up takeing only > 100 ms to do the query. It sounds to me like the table needs to be vacuumed vastly more often than you are doing. (You could confirm this by using VACUUM VERBOSE and noting how big it says the table is physically --- what you need to do is vacuum often enough to keep the table size in check.) You might consider setting up pg_autovacuum. It's also worth asking whether you have indexes set up to handle your common queries --- normally, only sequential-scan queries are really sensitive to the physical table size. regards, tom lane ---------------------------(end of broadcast)--------------------------- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq