Hi,

I don't really know how Slony works internally, but I know that it uses triggers to pick up new transactions. The config of my Slony cluster:

--- CONFIG START ---
cluster name = $CLUSTERNAME;
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST port=$MASTERPORT user=$REPLICATIONUSER password=$MASTERPASS';
node 2 admin conninfo = 'dbname=$SLAVE1DBNAME host=$SLAVE1HOST port=$SLAVE1PORT user=$REPLICATIONUSER password=$SLAVE1PASS';
init cluster (id=1, comment = $CLUSTERNAME);
create set (id=1, origin=1, comment=$CLUSTERTABLE1);
set add table (set id=1, origin=1, id=1, fully qualified name = 'public.[table]', comment=$CLUSTERTABLE1);
--- CONFIG END ---

Of course there are also other tables in the same database which I don't want to replicate. When loading big data dumps into those tables I noticed that, since I started using Slony on that database (although only for a different table), the big loads have become 5-10 times slower. The only explanation I have is that it must have something to do with Slony, because nothing else has changed. Does Slony perhaps do something for every row of the load that could slow it down, even though Slony should not care about this other table?

Thanks,
Aldor
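
To rule things out, I suppose I could check in psql whether Slony actually attached any trigger to one of the non-replicated tables (public.other_table below is just a placeholder for one of the tables I bulk-load into):

    -- list all triggers defined on a table that is NOT in the replication set;
    -- Slony's log trigger (named something like _$CLUSTERNAME_logtrigger...)
    -- should only appear on tables added with "set add table"
    SELECT tgname, tgenabled
    FROM pg_trigger
    WHERE tgrelid = 'public.other_table'::regclass;

If no Slony trigger shows up there, then the slowdown must come from somewhere else.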