
Re: Practical maximums (was Re: PostgreSQL theoretical

On Mon, 2006-07-31 at 09:53 -0500, Ron Johnson wrote:

> > The evasive answer is that you probably don't run regular full pg_dump 
> > on such databases.
> 
> Hmmm.
> 

You might want to use PITR for incremental backups, or maintain a standby
system using Slony-I (www.slony.info).
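
The rough PITR recipe is to archive each completed WAL segment somewhere
safe and take periodic base backups; the base backup plus the archived WAL
lets you recover to any later point without a full pg_dump. A minimal
sketch, where the archive path, backup label, and data directory are
placeholders I've made up for illustration:

    # postgresql.conf: archive each completed WAL segment as it fills.
    # (/mnt/archive is a placeholder path, not from the thread)
    archive_command = 'cp %p /mnt/archive/%f'

    # Then take a filesystem-level base backup while archiving is on.
    # (pg_start_backup/pg_stop_backup were renamed to
    # pg_backup_start/pg_backup_stop in PostgreSQL 15.)
    psql -c "SELECT pg_start_backup('weekly_base');"
    tar -cf /mnt/archive/base.tar /var/lib/pgsql/data   # placeholder data dir
    psql -c "SELECT pg_stop_backup();"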

> >> Are there any plans of making a multi-threaded, or even
> >> multi-process pg_dump?
> > 
> > What do you hope to accomplish by that?  pg_dump is not CPU bound.
> 
> Write to multiple tape drives at the same time, thereby reducing the
> total wall time of the backup process.

pg_dump just produces a single output stream, so you could stripe that
output across multiple devices with some fairly simple scripting. Make sure
you also have a script that can reconstruct the data when you need to
restore. You don't need a multi-threaded pg_dump; you need a script that
fans the output into multiple streams. A multi-threaded design only helps
CPU-bound applications.
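
A minimal sketch of that idea using GNU split (the database name, chunk
size, and paths here are illustrative assumptions, not from the thread):

    # Backup: carve the single pg_dump stream into fixed-size pieces.
    # Each piece can then be written to a different tape drive, even
    # concurrently, by whatever copy tool you prefer.
    pg_dump mydb | split -b 10G - /backup/dump.part.

    # Restore: reconstruction is just concatenation in lexical order,
    # which recreates the original byte stream for psql.
    cat /backup/dump.part.* | psql mydb

split names its pieces with lexically ordered suffixes (aa, ab, ...), so a
plain shell glob reassembles them in the right order.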

Doing full backups of that much data is always a challenge, and I don't
think PostgreSQL has any limitations here that other databases don't.

Regards,
	Jeff Davis


