
Re: dump of 700 GB database

> Note that cluster on a randomly ordered large table can be 
> prohibitively slow, and it might be better to schedule a 
> short downtime to do the following (pseudo code)
> alter table tablename rename to old_tablename; create table 
> tablename like old_tablename; insert into tablename select * 
> from old_tablename order by clustered_col1, clustered_col2;

That sounds like a great idea if it saves time.
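Spelled out, a minimal sketch of that approach in actual SQL (table and column names are the placeholders from the quoted pseudo code; the INCLUDING options and the cleanup steps are assumptions that would need adjusting to the real schema):

BEGIN;
ALTER TABLE tablename RENAME TO old_tablename;
-- copy column definitions, defaults and check constraints from the old table
CREATE TABLE tablename (LIKE old_tablename INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
-- rewrite the rows in the desired physical order
INSERT INTO tablename SELECT * FROM old_tablename
  ORDER BY clustered_col1, clustered_col2;
-- recreate indexes and re-point foreign key references here, then drop the old table:
-- DROP TABLE old_tablename;
COMMIT;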
 
>> (creating and moving over FK references as needed.)
>> shared_buffers=160MB, effective_cache_size=1GB, 
>> maintenance_work_mem=500MB, wal_buffers=16MB, 
>> checkpoint_segments=100
 
> What's work_mem set to?
work_mem = 32MB

> What ubuntu?  64 or 32 bit?  
It's 32 bit. I don't know whether a 4GB dump file sounds too small for a database
that was originally 350GB - nor why pg_restore fails...

> Have you got either a file 
> system or a set of pg tools limited to 4Gig file size?  
Not sure what the problem is on my server - I'm still trying to figure out what
makes pg_restore fail...
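If a 4GB per-file limit (e.g. a FAT32 filesystem, or 32-bit tools built without large-file support) does turn out to be the cause, the workaround from the PostgreSQL documentation for large databases is to pipe a plain-text dump through split and reassemble it with cat at restore time; the database names and chunk size below are placeholders:

pg_dump dbname | split -b 1G - dumpfile_part_
cat dumpfile_part_* | psql newdbname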

