Hi,

My trigger definition is:

CREATE TRIGGER msg_trigger BEFORE INSERT ON msg FOR EACH ROW EXECUTE PROCEDURE msg_function();

so it seems that it is a BEFORE trigger. To be totally honest, I have "really" limited knowledge of SQL and PostgreSQL, and all of this was gathered from recipes found on the web...

Regards,
Alessandro.

On Wednesday 17 December 2014 10:41:31 Tom Lane wrote:
> Torsten Zuehlsdorff <mailinglists@xxxxxxxxxxxxxxx> writes:
> > How many rows is "(SELECT * FROM upsert)" returning? Without knowing
> > more I would guess that the result set is very big and that could be
> > the reason for the memory usage.
>
> Result sets are not ordinarily accumulated on the server side.
>
> Alessandro didn't show the trigger definition, but my guess is that it's
> an AFTER trigger, which means that a trigger event record is accumulated
> in server memory for each inserted/updated row. If you're trying to
> update a huge number of rows in one command (or one transaction, if it's
> a DEFERRED trigger) you'll eventually run out of memory for the event
> queue.
>
> An easy workaround is to make it a BEFORE trigger instead. This isn't
> really nice from a theoretical standpoint; but as long as you make sure
> there are no other BEFORE triggers that might fire after it, it'll work
> well enough.
>
> Alternatively, you might want to reconsider the concept of updating
> hundreds of millions of rows in a single operation ...
>
> 			regards, tom lane

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
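
[Editor's note: for readers following the thread, a minimal sketch of the distinction Tom describes. The function body and the created_at column are purely hypothetical, since the source of msg_function was never posted; only the table name, trigger name, and the BEFORE/AFTER keywords come from the thread.]

-- Hypothetical sketch only (the real msg_function body was never posted).
-- A row-level BEFORE trigger runs inline for each row and must return NEW
-- (or NULL to skip the row); no per-row event record is queued.
CREATE OR REPLACE FUNCTION msg_function() RETURNS trigger AS $$
BEGIN
    NEW.created_at := now();  -- placeholder per-row work on a hypothetical column
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Fires inline per row; memory use does not grow with the number of rows.
CREATE TRIGGER msg_trigger
    BEFORE INSERT ON msg
    FOR EACH ROW EXECUTE PROCEDURE msg_function();

-- By contrast, an AFTER row trigger queues one event record per affected row
-- until end of statement (or end of transaction for a deferred constraint
-- trigger), which is the memory growth Tom describes:
-- CREATE TRIGGER msg_after_trigger
--     AFTER INSERT ON msg
--     FOR EACH ROW EXECUTE PROCEDURE msg_function();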