Re: Improve COPY performance for large data sets

Hi,

On Wednesday 10 September 2008, Ryan Hansen wrote:
> One thing I'm experiencing some trouble with is running a COPY of a
> large file (20+ million records) into a table in a reasonable amount of
> time.  Currently it's taking about 12 hours to complete on a 64-bit
> server with 3 GB of memory allocated (shared_buffers) and a single
> 320 GB SATA drive.  I don't seem to get any improvement running the
> same operation on a dual dual-core Opteron server with 16 GB of memory.

Your single SATA disk is probably very busy alternating between reading the 
source file and writing the table data. You could try raising 
checkpoint_segments to 64 or more, but a single SATA disk won't give you 
high I/O performance. You're getting what you paid for...
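
For reference, a minimal sketch of that change in postgresql.conf (64 is 
just the suggestion above; tune it to your WAL volume):

    # postgresql.conf
    # More WAL segments (16 MB each) between checkpoints, so a bulk
    # load triggers fewer checkpoints and less full-page writing.
    checkpoint_segments = 64

The setting takes effect on a reload, e.g. pg_ctl reload -D /path/to/data.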

You could maybe ease the disk load by launching the COPY from a machine on 
the local network, and while you're at it, since the file is big, try 
parallel loading with pgloader. See the sketches below.
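
In case it helps, here is a sketch of both ideas. The host, database, 
table, and file names are made up for illustration:

    # Run from another machine on the LAN: \copy reads the file on the
    # client side and streams it over the connection, so the server's
    # disk only has to handle the writes.
    psql -h dbserver -d mydb \
         -c "\copy mytable FROM '/data/bigfile.csv' WITH CSV"

And a rough pgloader 2.x configuration for the parallel load; I'm writing 
the option names from memory, so double-check them against the 
documentation for your pgloader version:

    [pgsql]
    host = dbserver
    base = mydb
    user = loader

    [mytable]
    table = mytable
    format = csv
    filename = /data/bigfile.csv
    ; read the input file in several chunks and feed
    ; multiple COPY streams in parallel
    split_file_reading = True
    section_threads = 4

pgloader also keeps going when it hits bad rows (it rejects them to a file 
instead of aborting the whole COPY), which is handy at 20+ million records.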

Regards,
-- 
dim
