> On Mon, Dec 20, 2010 at 8:03 PM, <snoop@xxxxxxxx> wrote:
> >> On Mon, Dec 20, 2010 at 6:23 PM, <snoop@xxxxxxxx> wrote:
> > Well, I'd prefer a big single storage instead of "too many spinning disks"
> > around, mainly for maintenance reasons, to avoid replication and too much
> > network traffic.
>
> Sure, but that then makes your storage system your single point of
> failure. With no replication, the best you can hope for if the storage
> array fails completely is to revert to a recent backup, since without
> any streaming replication you'll be missing tons of data if you've got
> many updates. If your database is mostly static or the changes can be
> recreated from other sources that's not so bad. However, if you have
> one storage array and it has a catastrophic hardware failure and goes
> down and stays down, then you'll need something else to hold the db
> while you wait for parts etc.

I'd use a filesystem replication solution like DRBD to avoid one disk
array being a single point of failure.

> >> > - I don't like the idea of having fixed size (16 megs regardless of the
> >> > committed transaction number!) WAL logs often "shipped" from one node to
> >> > another endangering my network performance (asynchronous replication)
> > (P.s. that was pre 9.0 PITR...)
>
> >> Streaming replication in 9.0 doesn't really work that way, so you
> >> could use that now with a hot standby ready to be failed over to as
> >> needed.
> >
> > Mmm, so I can use a hot standby setup without any need for replication
> > (same data dir) and no need for STONITH?
> > Sorry if my questions sound trivial to you, but my experience with
> > PostgreSQL is quite limited, this would be my first "more complex"
> > configuration, and I'm trying to figure out the best way to go.
> > Unfortunately it's not that easy to figure out from the documentation
> > alone.
>
> No no. With streaming replication the master streams changes to the
> slave(s) in real time, not by copying WAL files as in the previous PITR
> replication. No need for STONITH and/or fencing, since the master
> database writes to the slave database. Failover would be provided by
> whatever monitoring script you want to throw at the system, maybe with
> pgpool, pgbouncer, or even CARP if you wanted (I'm no fan of CARP; I had
> a lot of problems with it and Cisco switches a while back). Cut off
> the main server, put the slave server into recovery, and when recovery
> finishes and it's up and running, change the IP in the app config,
> bounce the app and keep going. With Slony you'd do something similar,
> but use slonik commands to promote the first slave to the master, where
> in the streaming replication method you'd bring the slave up out of
> recovery mode.
>
> All of that assumes two machines with their own storage. (Technically
> you could put it all on the same big array in different directories.)
>
> If you want to share the same data dir then you HAVE to make sure that
> only one machine at a time can ever open that directory and start
> PostgreSQL there. Two postmasters on the same data dir are instant
> death.

OK, I still have to study this technology a lot before going on, but now
I know where to look and how.
Thank you very much for your time! I really appreciate that.
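
P.s. Mostly for my own notes (and for anyone else digging this out of the
archives later), here is roughly how I picture the pieces described above.
These are untested sketches only, with made-up hostnames, addresses, user
names and paths.

The DRBD idea first: the data directory would live on a filesystem on the
DRBD device, mounted on one node at a time, which is also what keeps two
postmasters from ever opening the same data dir. A resource definition
might look something like this:

    # /etc/drbd.d/pgdata.res -- illustrative DRBD 8.x resource; hostnames,
    # devices and addresses are invented
    resource pgdata {
      protocol C;                   # synchronous block-level replication
      on db1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;        # backing partition for the PG data dir
        address   192.168.0.1:7788;
        meta-disk internal;
      }
      on db2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.0.2:7788;
        meta-disk internal;
      }
    }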
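
The 9.0 streaming replication setup you describe, as I understand it,
comes down to a handful of settings (parameter names are the 9.0 ones;
addresses, the repuser account and paths are invented):

    # master: postgresql.conf
    wal_level = hot_standby         # enough WAL detail for a queryable standby
    max_wal_senders = 2             # walsender slots for standby connections
    wal_keep_segments = 64          # keep spare WAL in case the standby lags

    # master: pg_hba.conf (on 9.0 the replication user has to be a superuser)
    host  replication  repuser  192.168.0.2/32  md5

    # standby: postgresql.conf
    hot_standby = on                # allow read-only queries during recovery

    # standby: recovery.conf, placed in the standby's data directory
    standby_mode = 'on'
    primary_conninfo = 'host=192.168.0.1 port=5432 user=repuser password=secret'
    trigger_file = '/var/lib/pgsql/failover.trigger'

The standby's data directory would start out as a base backup of the
master, something like:

    psql -c "SELECT pg_start_backup('clone', true)"
    rsync -a --exclude postmaster.pid /var/lib/pgsql/data/ db2:/var/lib/pgsql/data/
    psql -c "SELECT pg_stop_backup()"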
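
Failover as you describe it (cut off the master, let the standby leave
recovery, repoint the app) would then be roughly:

    # on the standby: create the trigger file named in recovery.conf;
    # the standby finishes recovery and starts accepting writes
    touch /var/lib/pgsql/failover.trigger

    # once the log reports the database is ready to accept connections,
    # change the db host in the app config and bounce the app (or move a
    # service IP / update pgpool or pgbouncer)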
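
And if I went the Slony route instead, the promotion would be a slonik
script (say switchover.slonik, run with "slonik switchover.slonik") along
these lines, with cluster name, node ids and conninfo strings all invented:

    cluster name = replcluster;
    node 1 admin conninfo = 'dbname=appdb host=db1 user=slony';
    node 2 admin conninfo = 'dbname=appdb host=db2 user=slony';
    lock set (id = 1, origin = 1);
    move set (id = 1, old origin = 1, new origin = 2);

with FAILOVER (id = 1, backup node = 2) instead of the LOCK SET / MOVE SET
pair if the old master has died outright. Please correct me if I've got
any of this wrong.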