Re: Basic Question on Point In Time Recovery

Have you looked into Barman? http://www.pgbarman.org/ It does what you want. You can take a full daily backup and it keeps track of the WAL files to allow for a PITR. It also allows you to run the backup from one of your slaves.
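
For concreteness, here is roughly what that looks like command-wise; the server name "main", the host, the paths and the timestamp are just examples, not our actual configuration:

    # nightly base backup of the server Barman knows as "main"
    barman backup main

    # list the backups on hand and the WAL range each one covers
    barman list-backup main

    # rebuild a data directory as of a point in time, on a target host
    barman recover --target-time "2015-03-11 08:00:00" \
        --remote-ssh-command "ssh postgres@dbhost" \
        main latest /var/lib/pgsql/9.4/data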

The way we have it set up is as follows: We have three servers, one master and two slaves. The master ships WAL files to both slaves. One of the slaves has Barman installed on it. Barman takes a copy of the WAL files and archives them, then nightly we do a full backup from the slave.
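
One common way to wire that up, as a sketch only (the host names, paths and schedule below are placeholders, not exactly what we run):

    # on the server whose WAL Barman archives: postgresql.conf
    wal_level = hot_standby
    archive_mode = on
    archive_command = 'rsync -a %p barman@backuphost:/var/lib/barman/main/incoming/%f'

    # on the Barman host: crontab entries for WAL housekeeping and the nightly backup
    * * * * *  barman cron
    0 2 * * *  barman backup main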

This takes the load off the master and gives us PITR with a full backup that is at most one day old.

Thanks,
-Joseph Kregloh

On Wed, Mar 11, 2015 at 9:26 AM, Stéphane Schildknecht <stephane.schildknecht@xxxxxxxxxxx> wrote:
Hello,

On 11/03/2015 11:54, Robert Inder wrote:
> We are developing a new software system which is now used by a number
> of independent clients for gathering and storing live data as part of
> their day to day work.
>
> We have a number of clients sharing a single server.  It is running
> one Postgres service, and each client is a separate user with access
> to their own database.  Each client's database will contain "hundreds
> of thousands" of records, and will be supporting occasional queries by
> a small number of users.   So the system is currently running on
> "modest" hardware.
>
> To guard against the server failing, we have a standby server being
> updated by WAL files, so if the worst comes to the worst we'll only
> lose "a few minutes" work.  No problems there.
>
> But, at least while the system is under rapid development, we also
> want to have a way to roll a particular client's database back to a
> (recent) "known good" state, but without affecting any other client.
>
> My understanding is that the WAL files mechanism is installation-wide
> -- it will affect all clients alike.
>
> So to allow us to restore data for an individual client, we're running
> "pg_dump" once an hour on each database in turn.  In the event of a
> problem with one client's system, we can restore just that one
> database, without affecting any other client.
>
> The problem is that we're finding that as the number of clients grows,
> and with it the amount of data, pg_dump is becoming more intrusive.
> Our perception is that when pg_dump is running for any database,
> performance on all databases is reduced.  I'm guessing this is because
> the dump is making heavy use of the disk.

One option is to have a server acting as a WAL archiver.

Run pg_basebackup against your slave every day, and keep all WAL files until the next pg_basebackup has been taken.
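
A minimal sketch of that daily step (host, user and paths here are placeholders):

    # daily cron on the archive server: tar-format, compressed base backup from the
    # slave, including the WAL needed to make it consistent on its own
    pg_basebackup -h slavehost -U replicator -D /backup/base/$(date +%F) -Ft -z -X fetch

    # WAL keeps arriving independently, e.g. via
    #   archive_command = 'rsync -a %p archiver:/backup/wal/%f'
    # segments older than the newest base backup can then be purged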

Whenever you have to restore a single customer, you can recover the whole instance up to the time *before* the worst happened, then pg_dump just that customer's database and pg_restore it.
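
Roughly like this, with placeholder names, times and paths, and assuming the 9.x recovery.conf mechanism:

    # 1. unpack the latest base backup into a scratch data directory
    mkdir /tmp/recover_data && tar xzf /backup/base/2015-03-10/base.tar.gz -C /tmp/recover_data

    # 2. /tmp/recover_data/recovery.conf: replay WAL up to just before the incident
    restore_command = 'cp /backup/wal/%f %p'
    recovery_target_time = '2015-03-11 08:55:00'

    # 3. start a throwaway instance on a spare port and dump only the damaged database
    pg_ctl -D /tmp/recover_data -o "-p 5433" start
    pg_dump -p 5433 -Fc client_a > /tmp/client_a.dump

    # 4. restore it into the production cluster without touching the other databases
    pg_restore -p 5432 -d client_a --clean /tmp/client_a.dump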

That way, you won't have to pg_dump all of your databases every hour or so.



>
> There is obviously scope for improving performance by using
> more, or more powerful, hardware.  That's obviously going to be
> necessary at some point, but it is obviously an expense that our
> client would like to defer as long as possible.
>
> So before we go down that route, I'd like to check that we're not
> doing something dopey.
>
> Is our current "frequent pg_dump" approach a sensible way to go about
> things?  Or are we missing something?  Is there some other way to
> restore one database without affecting the others?
>
> Thanks in advance.
>
> Robert.
>


--
Stéphane Schildknecht
Regional PostgreSQL contact for French-speaking Europe
Loxodata - Consulting, expertise and training
06.17.11.37.42


