Re: PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

On Mon, Dec 20, 2010 at 8:03 PM,  <snoop@xxxxxxxx> wrote:
>> On Mon, Dec 20, 2010 at 6:23 PM,  <snoop@xxxxxxxx> wrote:
> Well, I'd prefer a single big storage array to "too many spinning
> disks", mainly for maintenance reasons and to avoid replication and
> too much network traffic.

Sure, but that then makes your storage system your single point of
failure.  With no replication, the best you can hope for if the storage
array fails completely is to revert to a recent backup, since without
any streaming replication you'll be missing tons of data if you've got
many updates.  If your database is mostly static or the changes can be
recreated from other sources, that's not so bad.  However, if you have
one storage array and it has a catastrophic hardware failure and goes
down and stays down, then you'll need something else to hold the db
while you wait for parts etc.
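
Even a simple nightly dump gives you that fallback.  A minimal sketch,
with made-up host, database, and paths:

    # nightly logical backup, shipped somewhere off the array you're
    # worried about (dbhost, mydb, and /backups are placeholders)
    pg_dump -Fc -h dbhost -U postgres mydb \
        > /backups/mydb-$(date +%Y%m%d).dump
    # restore later with:
    #   pg_restore -d mydb /backups/mydb-20101220.dump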

>> > - I don't like the idea of having fixed-size WAL segments (16 megs
>> > regardless of the number of committed transactions!) frequently
>> > "shipped" from one node to another, endangering my network
>> > performance (asynchronous replication)

(P.S. that was pre-9.0 PITR...)
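
(For reference, that old-style WAL shipping is driven by
archive_command on the master; a minimal sketch, with a made-up
hostname and path:

    # master postgresql.conf, 8.x-style log shipping
    archive_mode = on    # 8.3+; older versions enable archiving just
                         # by setting archive_command
    archive_command = 'rsync -a %p standby:/var/lib/pgsql/walarchive/%f'

The standby then replays those 16 MB segments via restore_command or
pg_standby, which is exactly the per-segment network traffic you're
describing.)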

>> Streaming replication in 9.0 doesn't really work that way, so you
>> could use that now with a hot standby ready to be failed over to as
>> needed.
>
> Mmm, so I can use a hot standby setup without any need for replication
> (same data dir) and no need for STONITH?
> Sorry if my questions sound trivial to you, but my experience with
> PostgreSQL is quite limited, this would be my first "more complex"
> configuration, and I'm trying to figure out the best way to go.
> Unfortunately it's not that easy to figure out from the documentation
> alone.

No no.  With streaming replication the master streams changes to the
slave(s) in real time, rather than copying whole WAL files as the older
PITR-style replication did.  There's no need for STONITH and/or
fencing, since each server has its own data directory and the master
streams its changes to the slave.  Failover would be provided by
whatever monitoring script you want to throw at the system, maybe with
pgpool, pgbouncer, or even CARP if you wanted (I'm no fan of CARP; I
had a lot of problems with it and Cisco switches a while back).  Cut
off the main server, trigger the slave to finish recovery, and when
it's up and running as the new master, change the IP in the app config,
bounce the app, and keep going.  With Slony you'd do something similar,
but you'd use slonik commands to promote the first slave to master,
where in the streaming replication method you'd bring the slave out of
recovery.
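
A rough sketch of what that 9.0 setup looks like (the host addresses,
replication user, and trigger path below are made-up examples, not
anything from this thread; check the docs for your exact version):

    # master postgresql.conf
    wal_level = hot_standby   # ship enough WAL detail for a hot standby
    max_wal_senders = 3       # allow replication connections

    # master pg_hba.conf -- let the standby connect for replication
    host  replication  repuser  192.168.1.2/32  md5

    # standby postgresql.conf
    hot_standby = on          # allow read-only queries during recovery

    # standby recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=192.168.1.1 port=5432 user=repuser'
    trigger_file = '/tmp/pg_failover_trigger'

To fail over you touch the trigger file on the standby; it finishes
recovery and starts accepting writes as the new master.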

All of that assumes two machines, each with its own storage.
(Technically you could put it all on the same big array in different
directories.)

If you want to share the same data dir then you HAVE to make sure that
only one machine at a time can ever open that directory and start
PostgreSQL there.  Two postmasters on the same data dir are instant
death.
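
If you do go that route, the start script has to enforce that
exclusivity itself.  Something like this sketch (the data dir path is
made up, and note it only checks the local node -- you'd still need
STONITH/fencing to be sure the *other* node is really dead):

    #!/bin/sh
    DATADIR=/shared/pgdata
    # refuse to start if a postmaster already owns this data dir locally
    if pg_ctl -D "$DATADIR" status >/dev/null 2>&1; then
        echo "postmaster already running here, aborting" >&2
        exit 1
    fi
    pg_ctl -D "$DATADIR" start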

-- 
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


