Re: Load experimentation

On Mon, Dec 7, 2009 at 1:12 PM, Ben Brehmer <benbrehmer@xxxxxxxxx> wrote:
Hello All,

I'm in the process of loading a massive amount of data (500 GB). After some initial timings, I'm looking at 260 hours to load the entire 500GB. 10 days seems like an awfully long time so I'm searching for ways to speed this up. The load is happening in the Amazon cloud (EC2), on a m1.large instance:
-7.5 GB memory
-4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
-64-bit platform


So far I have modified my postgresql.conf file (PostgreSQL 8.1.3). The modifications I have made are as follows:

    Can you go with PG 8.4?  That's a start :-)

shared_buffers = 786432
work_mem = 10240
maintenance_work_mem = 6291456
max_fsm_pages = 3000000
wal_buffers = 2048
checkpoint_segments = 200
checkpoint_timeout = 300
checkpoint_warning = 30
autovacuum = off

   I'd set fsync=off for the duration of the load. I'd also make sure that you're using the COPY command (on the server side) to do the load.
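
   Roughly like this -- "mytable" and the file path below are just
   placeholders for your own tables and dump files:

       # postgresql.conf, only while the load is running:
       fsync = off

       -- then, from psql on the server, one COPY per table
       -- (the default text format expects tab-delimited data):
       COPY mytable FROM '/data/dumps/mytable.dat';

   Just remember to turn fsync back on once the load has finished.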
 
