Hello,

I'm starting to use Slony as a redundancy solution for the project I'm currently working on. I'm running SuSE Linux 9 on two machines: one holds the primary database and the other holds the backup database. The Slony version I'm using is 1.1.2; if any of the issues below have been addressed in a newer release, please let me know.

I have looked at the Nagios scripts and others, but I'm still left with questions about how to dynamically determine which node is the master and which is the slave, both during normal operation and after a failover. Suppose you want to check the state of the system without any prior knowledge of the node setup: how would you determine which machine is the master and which is the slave?

I'm also having trouble with the slonik script below, which is supposed to fail over to the slave when the master fails. For some reason it hangs, and I was wondering whether there are known issues with it. The test scenario I'm working with is: reboot the master, and the slave is supposed to take over.

slonik <<_EOF_
# ----
# This defines which namespace the replication system uses
# ----
cluster name = $CLUSTER;

# ----
# Admin conninfo's are used by the slonik program to connect
# to the node databases. So these are the PQconnectdb arguments
# that connect from the administrator's workstation (where
# slonik is executed).
# ----
node 1 admin conninfo = 'dbname=$DBNAME1 host=$HOST1 port=5432 user=$SLONY_USER1';
node 2 admin conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER2';

# ----
# Node 1 has failed; make node 2 the new origin of its sets
# ----
failover (id = 1, backup node = 2);
_EOF_

Thanks a lot for your help,
Slawek
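
P.S. One idea I had for the "who is master?" question was to query Slony's own configuration tables for the set origin, roughly as sketched below. This assumes the cluster schema is named _$CLUSTER and that sl_set.set_origin is the node id of the current origin (master) for each set; I don't know whether this is reliable during a failover or whether it is the recommended approach, so corrections are welcome.

psql -h $HOST1 -p 5432 -U $SLONY_USER1 -d $DBNAME1 <<_EOF_
-- For each replication set, set_origin should be the node id of the
-- node currently acting as master for that set.
SELECT set_id, set_origin FROM _$CLUSTER.sl_set;

-- List the known nodes so the origin id can be matched to a machine
-- (column names assumed from the Slony-I catalog).
SELECT no_id, no_active, no_comment FROM _$CLUSTER.sl_node;
_EOF_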