On Tue, Aug 07, 2007 at 02:33:19PM +0100, Richard Huxton wrote:
> Mark Makarowsky wrote:
> >I have a table with 4,889,820 records in it. The
> >table also has 47 fields. I'm having problems with
> >update performance. Just as a test, I issued the
> >following update:
> >
> >update valley set test='this is a test'
> >
> >This took 905641 ms. Isn't that kind of slow?
>
> The limiting factor here will be how fast you can write to your disk.

Well, very possibly how fast you can read, too. Using your assumption of
1k per row, 5M rows means 5G of data, which might well not fit in
memory. And if the entire table's been updated just once before, even
with vacuuming you're now at 10G of data.

--
Decibel!, aka Jim Nasby                        decibel@xxxxxxxxxxx
EnterpriseDB                                   http://enterprisedb.com
512.569.9461 (cell)
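[A back-of-the-envelope check of the sizes discussed above. The 1 KB/row figure is the assumption quoted from the earlier reply, not a measured value; the row count is from the original post. This is only a sketch of the arithmetic, written here in Python.]

```python
# Size arithmetic behind the "5G of data, 10G after a full update" estimate.
# ROWS is the reported table size; BYTES_PER_ROW is the assumed ~1 KB/row
# from the thread (47 fields), not a measured figure.

ROWS = 4_889_820
BYTES_PER_ROW = 1024

live_data = ROWS * BYTES_PER_ROW
print(f"live data: {live_data / 1024**3:.1f} GB")   # ~4.7 GB, i.e. roughly 5G

# PostgreSQL's MVCC writes a new version of every row on UPDATE, leaving the
# old versions behind as dead tuples. Until VACUUM-reclaimed space is reused,
# a table that has been fully updated once occupies about twice its live size.
after_full_update = live_data * 2
print(f"after one full-table update: {after_full_update / 1024**3:.1f} GB")  # ~9.3 GB, roughly 10G
```

At these sizes the working set no longer fits in a few gigabytes of RAM, so the update is bound by disk throughput on both the read and the write side, which is consistent with the ~15-minute runtime reported.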