jeandavid8@xxxxxxxxxxx (Jean-David Beyer) writes:
> But what is the limitation on such a thing? In this case, I am just
> populating the database and there are no other users at such a time. I am
> willing to lose the whole insert of a file if something goes wrong -- I
> would fix whatever went wrong and start over anyway.
>
> But at some point, disk IO would have to be done. Is this just a function of
> how big /pgsql/data/postgresql.conf's shared_buffers is set to? Or does it
> have to do with wal_buffers and checkpoint_segments?

I have done bulk data loads where I was typically loading hundreds of
thousands of rows in a single transaction, and it is worth observing that
loading data from a pg_dump does exactly the same thing: in general, each
table's data is loaded as a single transaction.

It has tended to be the case that increasing the number of checkpoint
segments is helpful, though it's less obvious that this still holds in 8.2
and later versions, what with the ongoing changes to checkpoint flushing.

In general, this isn't something that needs to be tuned really finely; if
your DB is reasonably tuned, "pretty big transactions" should generally
work fine, up to rather large values of "pretty big."
-- 
"cbbrowne","@","acm.org" http://linuxdatabases.info/info/languages.html
"Why use Windows, since there is a door?"
-- <fachat@xxxxxxxxxxxxxxxxxxxxxxx> Andre Fachat

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend
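For what it's worth, the settings the original question asks about all live
in postgresql.conf. The values below are only an illustrative sketch of the
kind of bulk-load tuning discussed above (assumed numbers for an 8.x box
with memory to spare, not recommendations):

```ini
# postgresql.conf -- illustrative bulk-load settings (assumed values)
shared_buffers = 32768       # in 8kB pages here: ~256MB of buffer cache
wal_buffers = 64             # in 8kB pages: ~512kB of WAL buffer space
checkpoint_segments = 32     # allows ~512MB of WAL between checkpoints
checkpoint_timeout = 900     # seconds; spreads checkpoints out during a load
```

If you are loading from a plain SQL script rather than a pg_dump archive,
psql's --single-transaction (-1) option wraps the entire file in one
transaction, which matches the "whole file succeeds or fails together"
behavior described above.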