Hi,
I have an application that replicates data from an Oracle database to PostgreSQL. The flow is as follows:
oracle.table1 -> AppSincronizador -> postgresql.table1 -> Trigger (upd, ins, del) -> postgresql.table2
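
For context, the trigger on postgresql.table1 looks roughly like the sketch below. This is a minimal illustration, not my real code: the column names (id, payload) and the function/trigger names are placeholders.

-- Hypothetical trigger function: mirrors changes on table1 into table2
CREATE OR REPLACE FUNCTION replicate_table1() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO table2 (id, payload) VALUES (NEW.id, NEW.payload);
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        UPDATE table2 SET payload = NEW.payload WHERE id = NEW.id;
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        DELETE FROM table2 WHERE id = OLD.id;
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- One row-level trigger covering the three operations
CREATE TRIGGER trg_table1_sync
    AFTER INSERT OR UPDATE OR DELETE ON table1
    FOR EACH ROW EXECUTE PROCEDURE replicate_table1();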
I'm running into situations where, when the data volume is large, changes that should be in the final table are missing, leaving the PostgreSQL tables out of sync. The application performs the load through a single connection, committing every 1000 records.
It is as if the triggers were disabled; when I perform the same operation manually, it works. Is there a bug or some situation in which PostgreSQL disables these triggers?
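
While investigating, the one documented situation I know of where PostgreSQL skips triggers is session_replication_role: when it is set to 'replica' (some loaders and replication tools do this), ordinary triggers, whose pg_trigger.tgenabled is 'O' for origin, do not fire. Below is a sketch of the checks I can run to rule this out, using the placeholder names from the sketch above:

-- tgenabled: 'O' origin (default), 'D' disabled, 'R' replica-only, 'A' always
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'table1'::regclass
  AND NOT tgisinternal;

-- If this returns 'replica', triggers marked 'O' are silently skipped
-- for statements run in this session
SHOW session_replication_role;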
Versions:
CentOS 6.5
PostgreSQL 9.3.5
Oracle 11g
I found this post that explains one such situation:
"AFTER triggers are more expensive than BEFORE triggers because they must be queued up until the statement finishes doing its work, then executed. They are not spilled to disk if the queue gets big (at least in 9.4 and below, may change in future), so huge AFTER trigger queues can overrun available memory, resulting in the statement aborting."
PS: Right now I'm not interested in the performance side; I'll take care of that later. The question is why there are seemingly random operations that never reach the final table.
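
To narrow it down, I plan to add logging inside the trigger function so the server log shows whether it fires at all during the bulk load. Another sketch using the same placeholder names; the actual replication logic into table2 is elided here:

CREATE OR REPLACE FUNCTION replicate_table1() RETURNS trigger AS $$
BEGIN
    -- RAISE LOG always reaches the server log, so every firing leaves a trace
    IF TG_OP = 'DELETE' THEN
        RAISE LOG 'table1 trigger fired: op=%, id=%', TG_OP, OLD.id;
        RETURN OLD;
    END IF;
    RAISE LOG 'table1 trigger fired: op=%, id=%', TG_OP, NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;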
Thanks!
Regards,
Emanuel Araújo
Linux Certified, DBA PostgreSQL