Mark Makarowsky wrote:
> I have a table with 4,889,820 records in it. The table also has 47
> fields. I'm having problems with update performance. Just as a test,
> I issued the following update:
>
>   update valley set test='this is a test'
>
> This took 905641 ms. Isn't that kind of slow?
The limiting factor here will be how fast you can write to your disk. Let's see: roughly 4.9 million rows in ~905 seconds is about 5,400 rows/second. Now, you don't say how large your rows are, but assuming each row is, say, 1kB, that would be around 5.4MB/sec of writes, which is not brilliant. Bear in mind that an UPDATE in PostgreSQL writes a new version of every row, so the table will roughly double in size until VACUUM reclaims the old versions. The simplest way to find out the total activity is to check how much disk space PG is using before and after the update.
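That before-and-after check can be done straight from psql. Note this is a sketch assuming 8.1 or later, where pg_total_relation_size() and pg_size_pretty() are available; on older versions you'd look at relpages in pg_class instead:

```sql
-- Size of the table plus its indexes and TOAST data, before the update.
SELECT pg_size_pretty(pg_total_relation_size('valley'));

UPDATE valley SET test = 'this is a test';

-- ...and after. Since the UPDATE writes a new version of every row,
-- expect this to be roughly double until VACUUM reclaims the old versions.
SELECT pg_size_pretty(pg_total_relation_size('valley'));
```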
What you'll need to do is monitor disk activity, in particular the number of writes and how much time the CPU spends waiting for those writes to complete (the "iowait" figure in tools like iostat or vmstat).
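Normally you'd just watch iostat or vmstat while the update runs, but as an illustration of what the iowait number means, here's a minimal Linux-only sketch (the /proc/stat path and its field layout are Linux assumptions) that samples the kernel's CPU counters and reports the share of time spent waiting on I/O:

```python
# Minimal Linux-only sketch: sample /proc/stat twice and report what
# fraction of CPU time went to waiting on I/O ("iowait").
import time

def cpu_times():
    # The first line of /proc/stat is the aggregate "cpu" line:
    # cpu user nice system idle iowait irq softirq ...
    with open("/proc/stat") as f:
        fields = f.readline().split()
    return [int(x) for x in fields[1:]]

def iowait_fraction(interval=1.0):
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    # iowait is the fifth counter (index 4) on the cpu line.
    return deltas[4] / total if total else 0.0

if __name__ == "__main__":
    print("iowait: {:.1%}".format(iowait_fraction()))
```

If that fraction stays high while the update runs, the disk is the bottleneck, not the CPU.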
If your standard usage pattern is to update a single field in all rows of a large table, then performance isn't going to be sub-second, I'm afraid.
--
Richard Huxton
Archonet Ltd

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not match