* Jay Greenfield (jag@xxxxxxxxxxxxx) wrote:
> Database has one table with 1.2 million rows
> Query:
>
> UPDATE ntdn SET gha=area/10000
>
> I could post the EXPLAIN ANALYZE results but its 4,000+ lines long

How do you get 4,000+ lines of EXPLAIN ANALYZE output for one update
query in a database with only one table?  Something a bit fishy there.
Perhaps you mean EXPLAIN VERBOSE, though I don't really see how that'd
be so long either, but it'd be closer.  Could you provide some more sane
information?

> I've run various tests on a number of Postgres parameters; none of which
> have come close to Access' time of 5.00 min.  Postgres times range between
> 24 min and 121 min.
>
> Some of the Postgres variables and ranges I've tested.
> work_mem: 1,000 to 2,000,000
> temp_buffers: 1,000 to 10,000
> shared_buffers: 1,000 to 64,000
> sort_mem: 1,024,000
> fsync on / off
>
> Why does Access run so much faster?  How can I get Postgres to run as fast
> as Access?

While it's true that Access almost certainly takes some shortcuts, 24
minutes for an update across 1.2 million rows seems an awfully long time
for Postgres.  Is this table exceptionally large in some way (ie: lots
of columns)?  I expect running with fsync off would be closer to 'Access
mode', though it has risks (of course).

Also, it might be faster to insert into a separate table rather than run
a huge update like that in Postgres.  Also, if there are indexes on the
table in question, you might drop them before doing the update/insert
and recreate them after the query has finished.

You really haven't provided anywhere near enough information to figure
out what the actual problem is here.  Access does take shortcuts, but
the times you're posting for Postgres seem quite far off based on the
hardware and commands you've described...

	Thanks,

		Stephen
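To make the drop-indexes and separate-table suggestions concrete, here's a rough sketch.  The index name, index definition, and extra column names (col1, col2) are all made up for illustration, since the table definition wasn't posted:

```sql
-- Option 1: drop indexes, run the bulk update, recreate, then vacuum
-- to reclaim the dead row versions a full-table UPDATE leaves behind.
DROP INDEX ntdn_gha_idx;          -- hypothetical index name

UPDATE ntdn SET gha = area/10000;

CREATE INDEX ntdn_gha_idx ON ntdn (gha);
VACUUM ANALYZE ntdn;

-- Option 2: write the result into a fresh table instead of updating
-- 1.2 million rows in place, then swap the tables.
-- (col1, col2 stand in for ntdn's other columns, which are unknown.)
CREATE TABLE ntdn_new AS
  SELECT col1, col2, area, area/10000 AS gha
  FROM ntdn;

ALTER TABLE ntdn RENAME TO ntdn_old;
ALTER TABLE ntdn_new RENAME TO ntdn;
```

Option 2 tends to win because the update rewrites every row anyway (MVCC keeps the old versions until vacuum), so building a clean copy avoids both the dead tuples and the per-row index maintenance.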