Bill Moran <wmoran@xxxxxxxxxxxxxxxxx> writes:
> In response to "tamanna madaan" <tamanna.madan@xxxxxxxxxxxxxxx>:
>>
>> I am using postgres 8.1.2 with slony 1.1.5 for replication between
>> two nodes. A very high number of db operations (2.8 million inserts,
>> 1.4 million updates and 4.5 lakh (450,000) deletes) is performed on
>> the db in a single transaction, and this is repeated 5-6 times a day
>> at an interval of, say, 2 hours. This process has been running for 5
>> consecutive days. It is observed that the db is getting slower over
>> time. The number of dead tuples in pg_listener, sl_log_1 and
>> sl_seqlog keeps increasing, even though I have autovacuum enabled;
>> the slony-related tables (sl_log_1, sl_seqlog etc.) are configured
>> not to be processed by autovacuum. Please let me know what could be
>> the reason for the increasing dead tuples in these tables and for
>> postgres getting slow.
>
> The slony docs state, and I quote:
> "Long running transactions are Evil"
> http://slony.info/documentation/slonyadmin.html#BESTPRACTICES
>
> I'm going to guess that a single transaction with multi-millions of
> tuple changes is about as evil as it gets.
>
> This is a known shortcoming of Slony. You're going to need carefully
> tuned vacuum, well designed schema, adequate hardware, and clever
> schema design to keep a system like that healthy. That is, if you
> can't figure out a way to avoid the huge transactions.

Ok, if we're going to talk about the environment leading to the
problem, I have to point to this:

http://wiki.postgresql.org/wiki/Londiste_Tutorial#Londiste_is_eating_all_my_CPU_and_lag_is_raising

With Skytools (Londiste), you can set pgq_lazy_fetch so that the
replicas use cursors to consume big batches of events, and it runs
smoothly.

HTH, Regards,
--
dim
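For reference, a londiste.py consumer config with pgq_lazy_fetch turned
on looks roughly like the sketch below. The job name, connection
strings, queue name and file paths are placeholders, and the batch size
of 500 is only an example; the tutorial linked above covers the exact
options for the Skytools version in use.

    [londiste]
    job_name       = replica_node1_to_node2
    provider_db    = dbname=proddb host=node1
    subscriber_db  = dbname=proddb host=node2
    pgq_queue_name = londiste.replika
    logfile        = /var/log/londiste/%(job_name)s.log
    pidfile        = /var/run/londiste/%(job_name)s.pid

    # Fetch events from the queue through a cursor, 500 rows per fetch,
    # instead of pulling the whole batch into the consumer at once.
    pgq_lazy_fetch = 500

The idea, as the tutorial describes it, is that with lazy fetch enabled
the consumer reads each batch through a cursor in fixed-size chunks, so
even a batch produced by one of those multi-million-row transactions
does not have to be held in the consumer's memory in one go.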