On 02/25/2015 06:14 PM, Emanuel Araújo wrote:
Hi, I have an application that replicates data from an Oracle database to PostgreSQL. The flow is as follows:

  oracle.table1 -> AppSincronizador -> postgresql.table1 -> Trigger (upd, ins, del) -> postgresql.table2

When the data volume is large, I am seeing situations where changes that should have reached the final table are not there, leaving the PostgreSQL tables out of sync. The application uses a single transaction and commits every 1000 records.
How large?
It is as if the triggers were disabled; when I perform the same operation manually, it works. Is there a bug, or a situation in which PostgreSQL disables these triggers?
Hard to say without seeing the trigger definition or the code it is calling.
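For context, a minimal sketch of the kind of trigger setup being described (the real definition was not posted to the list; the table structure, the id/payload columns, and the function body below are assumptions):

  -- Hypothetical trigger function that mirrors changes on table1 into table2.
  CREATE OR REPLACE FUNCTION sync_table2() RETURNS trigger AS $$
  BEGIN
      IF TG_OP = 'DELETE' THEN
          DELETE FROM table2 WHERE id = OLD.id;
          RETURN OLD;
      ELSIF TG_OP = 'UPDATE' THEN
          UPDATE table2 SET payload = NEW.payload WHERE id = NEW.id;
          RETURN NEW;
      ELSE  -- INSERT
          INSERT INTO table2 (id, payload) VALUES (NEW.id, NEW.payload);
          RETURN NEW;
      END IF;
  END;
  $$ LANGUAGE plpgsql;

  -- Row-level AFTER trigger for ins/upd/del, as described in the original flow.
  CREATE TRIGGER trg_sync_table2
      AFTER INSERT OR UPDATE OR DELETE ON table1
      FOR EACH ROW EXECUTE PROCEDURE sync_table2();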
Versions: CentOS 6.5, PostgreSQL 9.3.5, Oracle 11g.

I found this post that explains one such situation:

  "AFTER triggers are more expensive than BEFORE triggers because they must be
  queued up until the statement finishes doing its work, then executed. They
  are not spilled to disk if the queue gets big (at least in 9.4 and below,
  may change in future), so huge AFTER trigger queues can cause available
  memory to overrun, resulting in the statement aborting."

Link: http://dba.stackexchange.com/questions/88761/scaling-postgresql-triggers

PS. Right now I am not interested in the performance side, which I will deal with later; the question is why random operations never make it to the final table.

Thanks!

--
Best regards,
Emanuel Araújo
Linux Certified, PostgreSQL DBA
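A quick diagnostic sketch (an editorial aside, using the hypothetical table and key names from the sketch above): PostgreSQL skips ordinary triggers when the session runs with session_replication_role = 'replica', and a trigger can also be disabled at the table level, so both are worth checking from the session doing the load, along with a rough count of rows that never reached the final table:

  -- tgenabled: 'O' = fires at origin (normal), 'D' = disabled, 'R'/'A' = replica/always.
  SELECT tgname, tgenabled
  FROM pg_trigger
  WHERE tgrelid = 'table1'::regclass AND NOT tgisinternal;

  -- Must be 'origin' (or 'local') for ordinary triggers to fire in this session.
  SHOW session_replication_role;

  -- Rough out-of-sync check (assumes the hypothetical id key from the sketch above).
  SELECT count(*)
  FROM table1 t1
  WHERE NOT EXISTS (SELECT 1 FROM table2 t2 WHERE t2.id = t1.id);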
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx