Re: Hot Standby vs slony

On Thu, Feb 8, 2018 at 1:09 PM, Mark Steben <mark.steben@xxxxxxxxxxxxxxxxx> wrote:
Good afternoon,

We currently run postgres 9.4, with the following topology:

                 +--> slony        (reporting, high availability)
production ------+
                 +--> hot standby  (disaster recovery)

 We would like to replace slony with another instance of hot standby as follows:


                 +--> hot standby1 (reporting, high availability)
production ------+
                 +--> hot standby2 (disaster recovery)

Is this possible?  I see in the documentation that it is possible for warm standby, but I don't see it confirmed in the section on hot standby.


Yes, you can run multiple hot standbys from the primary, or cascade hot standbys from each other (and combinations of both).
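
In case a concrete reference helps, here's a minimal sketch of the 9.4-era settings involved. Hostnames, the replication user, and archive paths below are placeholders; the exact values will depend on your environment:

    # postgresql.conf on the primary
    wal_level = hot_standby        # 9.4 setting name (renamed to 'replica' in 9.6+)
    max_wal_senders = 5            # walsender slots for both standbys plus pg_basebackup
    wal_keep_segments = 512        # or use replication slots instead
    archive_mode = on
    archive_command = 'rsync %p archive-host:/wal_archive/%f'   # placeholder

    # postgresql.conf on each standby
    hot_standby = on               # allow read-only queries while in recovery

    # recovery.conf on hot standby1 (reporting/HA) and hot standby2 (DR)
    standby_mode = 'on'
    primary_conninfo = 'host=production port=5432 user=replication'

Each standby also needs a replication entry in the primary's pg_hba.conf and is seeded with pg_basebackup; hot standby2 can instead point its primary_conninfo at hot standby1 if you prefer to cascade rather than fan out from the primary.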

I can say that with confidence, as one of the common configurations I'm running (across roughly 1500 servers) consists of a primary PG cluster with a hot standby using streaming replication (async) within the same data centre, a remote "primary" hot standby fed by WAL shipping, and a remote hot standby streaming off that. The remote primary runs with delayed WAL application, varying between 1 and 4 hours depending on the class of replica set.

This configuration covers basic DR and HA, and in the case of user error we can fail over (promote the remote primary replica before any user-destructive changes are applied to the remote hot standby). One caveat is that a sudden interruption between DCs followed by a failover could result in some data loss, depending on the archive_timeout/WAL switch rate and so on, but that's a business RPO that we've agreed upon with clients.
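
For the delayed remote pair described above, the relevant recovery.conf pieces look roughly like this (again just a sketch with placeholder hostnames and paths; recovery_min_apply_delay exists from 9.4 onward):

    # recovery.conf on the remote "primary" standby (fed by WAL shipping, delayed apply)
    standby_mode = 'on'
    restore_command = 'cp /wal_archive/%f %p'
    archive_cleanup_command = 'pg_archivecleanup /wal_archive %r'
    recovery_min_apply_delay = '2h'   # e.g. somewhere in the 1-4 hour window mentioned above

    # recovery.conf on the remote hot standby streaming off that node
    standby_mode = 'on'
    primary_conninfo = 'host=remote-primary-standby port=5432 user=replication'

A cascading standby forwards WAL as it receives it (including WAL restored from the archive), so the downstream standby is not held back by the apply delay on the middle node; the delay only affects what that node has applied, and therefore how long you have to promote it after a user error. archive_timeout on the production primary bounds how stale the archive, and hence the RPO, can get when write traffic is low.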

