
Re: Basic Question on Point In Time Recovery

Hi Robert...

On Wed, Mar 11, 2015 at 11:54 AM, Robert Inder <robert@xxxxxxxxxxxxxxxxx> wrote:
> Is our current "frequent pg_dump" approach a sensible way to go about
> things? Or are we missing something? Is there some other way to
> restore one database without affecting the others?

As you've been told before, pg_dump is the way to go, but it hits the I/O load hard. Also, depending on where you are dumping to, you may be shooting yourself in the foot (dump to another disk, or to another machine).
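For example, a minimal sketch (the database name and mount point are placeholders for whatever you use):

    # custom-format dump written to a separate disk, so the dump I/O
    # does not compete with the database's own disks
    pg_dump -Fc -f /mnt/backup_disk/mydb.dump mydb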

You may try streaming replication + pg_dump; we are currently doing this, although not in your exact scenario.

That is: build a streaming replication slave and pg_dump from the slave. If needed, restore on the master.
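Something along these lines, as a sketch for a 9.x setup (hostnames, the replication user and paths are placeholders, adjust to your installation):

    # master, postgresql.conf:
    wal_level = hot_standby
    max_wal_senders = 3

    # master, pg_hba.conf (let the slave connect for replication):
    host  replication  replicator  192.168.0.2/32  md5

    # on the slave: clone the master; -R writes a recovery.conf for you
    pg_basebackup -h master -U replicator -D /var/lib/pgsql/data -R -X stream

    # slave, postgresql.conf:
    hot_standby = on

    # dump from the slave; restore on the master only if needed
    pg_dump -h slave -Fc -f /mnt/dumps/mydb.dump mydb
    pg_restore -h master -d mydb --clean /mnt/dumps/mydb.dump

One thing to watch: a long pg_dump on the slave can get cancelled by replication conflicts, so you may need to raise max_standby_streaming_delay on the slave (or set it to -1 and just let the slave lag while the dump runs).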

The thing is, you can use desktop-class machines for the slave. If you do not have spare machines, I would suggest a desktop-class machine with plenty of RAM and whatever disks you need for the DB, plus an extra disk to pg_dump to (so pg_dump does not compete with the DB for the DB disks; that really kills performance). Replication slaves do not need that much RAM, as the only queries they are going to run are the pg_dump ones, but desktop RAM is cheap anyway.

We did this with a not-so-powerful desktop with an extra SATA disk to store the pg_dumps, and it worked really well. We are presently using two servers, with one of the extra gigabit interfaces on a crossover cable for the replication connection, plus an extra SATA disk to make hourly pg_dumps, and it works quite well.
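The hourly dumps are just a cron job on the slave, something like this (paths and DB name are placeholders; note the % has to be escaped in a crontab):

    # hourly dump to the extra SATA disk, one rotating file per hour of the day
    0 * * * *  pg_dump -Fc -f /mnt/dumpdisk/mydb-$(date +\%H).dump mydb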

Francisco Olarte.

