Tim Uckun <timuckun@xxxxxxxxx> writes:

> I want to write a trigger which runs semi-complicated code after each
> insert. I have done some reading, and from what I can gather this could
> cause problems because after-insert triggers "don't spill to the disk"
> and can cause queue problems. Many people suggest LISTEN/NOTIFY, but
> that's not going to help me because my daemons could be offline and I
> would lose records.
>
> I have two questions.
>
> There are some hints out there that it could be possible to do
> asynchronous triggers based on dblink, but I haven't seen any
> documentation or examples of this. Is there a writeup someplace about
> this?
>
> Secondly, I had the idea of "partitioning" the trigger processing by
> partitioning the table and then putting a trigger on each child table.
> This way I could theoretically be running the triggers in parallel. Is
> my presumption correct here? If I only have one table, the trigger
> calls get queued up one at a time, but if I partition my table into N
> tables, am I running N triggers simultaneously?

False on both counts.

Nothing prevents concurrent firing of the same trigger on the same
table given multi-session concurrent inserts. Likewise, nothing
prevents contention-related single-threading of any triggers, however
they come to fire, if the code they run results in lock contention with
other sessions. Just as with two or more sessions trying to update the
same row, you are going to single-thread around such an operation
whether you like it or not.

You need to tell us a lot more about your problem and what the triggers
do. That said, if the underlying goal is durable asynchronous
processing, a queue-table pattern may be worth a look; see the sketch
below.

> Thanks.

--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consulting@xxxxxxxxxxx
p: 312.241.7800

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
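
Regarding the "daemons could be offline" concern: one common way to get
durable asynchronous processing is to have the AFTER INSERT trigger
write a row into a queue table that the daemons poll, with NOTIFY (if
you still want it) serving only as a wake-up hint, since the queued row
survives whether or not anyone is listening. This is a minimal sketch,
not a drop-in solution, assuming a hypothetical source table "events"
with an "id" column and hypothetical names "event_queue" and
"queue_event":

    CREATE TABLE event_queue (
        id          bigserial    PRIMARY KEY,
        event_id    bigint       NOT NULL,
        created_at  timestamptz  NOT NULL DEFAULT now(),
        processed   boolean      NOT NULL DEFAULT false
    );

    -- Queue the new row's id; the real work happens later, outside the
    -- inserting transaction, whenever a daemon gets around to it.
    CREATE OR REPLACE FUNCTION queue_event() RETURNS trigger AS $$
    BEGIN
        INSERT INTO event_queue (event_id) VALUES (NEW.id);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER events_queue_after_insert
        AFTER INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE queue_event();

A daemon would then pull work with something like SELECT ... FROM
event_queue WHERE NOT processed ORDER BY id FOR UPDATE, do the
semi-complicated processing, and mark the rows processed (or delete
them) in the same transaction. Nothing is lost while the daemons are
down, and the trigger itself stays cheap, which also limits the lock
contention described above.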