Luke Lonergan wrote:
Greg,
On 10/30/06 7:09 AM, "Spiegelberg, Greg" <gspiegelberg@xxxxxxxxxx> wrote:
I broke that file into two files of 550K rows each and performed two
simultaneous COPYs after dropping the table, recreating it, issuing a sync
on the system to be sure, &c, and nearly every time both COPYs finished in
12 seconds. About a 20% gain, to ~91K rows/second.
Admittedly, this was a pretty rough test, but a 20% savings, if it can be
put into production, is worth exploring for us.
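For reference, a minimal sketch of that two-way split, under assumptions: the file/table names (`data.csv`, `my_table`, `mydb`) are placeholders, tiny stand-in data replaces the 550K-row halves, and the parallel psql invocations are shown as comments since they depend on the local setup:

```shell
set -e

# Hypothetical stand-in for the real CSV file.
printf '%s\n' 1,a 2,b 3,c 4,d > data.csv

# Split into two halves by line count (ceiling division for odd totals).
total=$(wc -l < data.csv)
half=$(( (total + 1) / 2 ))
split -l "$half" data.csv part_

# Each half would then be loaded by its own backend in parallel, e.g.:
#   psql -d mydb -c "\copy my_table FROM 'part_aa' CSV" &
#   psql -d mydb -c "\copy my_table FROM 'part_ab' CSV" &
#   wait
ls part_*
```

Two backends mean two COPY code paths running concurrently, which is where the ~20% gain over a single CPU-bound loader comes from.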
Did you see whether you were I/O- or CPU-bound in your single-threaded COPY?
A 10-second "vmstat 1" snapshot would tell you/us.
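That snapshot is just the command below (run during the COPY); the interpretation notes in the comments assume the standard procps vmstat column layout:

```shell
# Two one-second samples here for brevity; take ~10 during the real COPY.
# High "wa" (I/O wait) suggests the load is disk-bound; high "us"+"sy"
# with low "wa" suggests the COPY backend is CPU-bound.
vmstat 1 2 | tee vmstat.log
```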
With Mr. Workerson (:-) I'm thinking his benefit might be a lot better,
because the bottleneck is the CPU and it *may* be the time spent in the
index-building bits.
We've found that there is an ultimate bottleneck at about 12-14MB/s despite
having sequential disk write speeds in the hundreds of MB/s. I forget what
the latest bottleneck was.
I have personally managed to load a bit less than 400k rows/s (5 int
columns, no indexes) on very fast disk hardware - at that point PostgreSQL
is completely CPU-bottlenecked (2.6GHz Opteron).
Using multiple processes to load the data will help to scale up to about
900k rows/s (4 processes on 4 cores).
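The four-process pattern can be sketched the same way; `split -n l/4` is GNU coreutils, the data here is a tiny stand-in for the 5-int-column rows, and the table/database names are placeholders:

```shell
set -e

# Stand-in data: 8 rows of 5 int columns.
for i in 1 2 3 4 5 6 7 8; do echo "$i,$i,$i,$i,$i"; done > ints.csv

# Split by lines into 4 chunks without breaking rows (GNU coreutils).
split -n l/4 ints.csv chunk_

# One loader process per chunk, one per core, e.g.:
#   for c in chunk_*; do
#     psql -d mydb -c "\copy five_ints FROM '$c' CSV" &
#   done
#   wait
wc -l chunk_*
```

Going from ~400k rows/s to ~900k rows/s on 4 cores is roughly a 2.25x speedup, i.e. well short of linear, consistent with some shared bottleneck beyond the per-backend CPU cost.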
Stefan