Copy database performance issue

Hello there,

I've got an application that has to copy an existing database to a new database on the same machine.

I used to do this with a pg_dump command piped to psql; however, the database is 18 GB on disk and this takes a LONG time.
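For reference, the dump-and-restore approach looks roughly like this (database names here are hypothetical placeholders, and the commands assume a running PostgreSQL server you can connect to):

```shell
# Create the empty target database, then pipe a logical dump into it.
# Every row is re-executed as SQL on the way in, which is why this is
# slow on a large database.
createdb newdb && pg_dump olddb | psql --quiet newdb
```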

So I read up, found some things in this list's archives, and learned that I can use createdb --template=old_database_name new_database_name to do the copy much faster, since nobody is accessing the source database while the copy happens.
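The equivalent from SQL, in case it helps anyone reading the archives, would be something like this (again with placeholder names; the template copy only works while no one else is connected to the source database):

```sql
-- File-level copy of olddb's data files into a new database.
-- Fails if any other session is connected to olddb.
CREATE DATABASE newdb TEMPLATE olddb;
```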


The problem is, it's still too slow. My question: is there any way I can use 'cp' or something similar to copy the data files, and THEN, after that's done, modify the database system files/system tables to recognize the copied database?

For what it's worth, I've got fsync turned off, and I've read every tuning guide out there, so my settings are probably reasonable. It's a Solaris 10 machine (Sun Fire V440, 2 processors, 8 GB RAM, 4 Ultra320 drives), and here are the relevant settings:

shared_buffers = 300000
work_mem = 102400
maintenance_work_mem = 1024000

bgwriter_lru_maxpages = 0
bgwriter_lru_percent = 0

fsync = off
wal_buffers = 128
checkpoint_segments = 64


Thank you!


Steve Conley

