On 5/25/06, Rafal Pietrak <rafal@xxxxxxxxxxxxxxxxxx> wrote:
> I'd like to propose a syntax/semantics for such a trigger: triggers normally execute inside a transaction. A COMMIT within a trigger could mean: "do a fork: fork-1) return to the main code path and schedule the COMMIT there; fork-2) continue in the background".
I don't think fork(2)ing a running backend is a good idea; it would probably end up in major data corruption. You want to call fork(2) in your application instead. Something like: "if (fork() == 0) { reestablish the connection, run the trigger code against the database } else { gracefully return };"
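A rough sketch of that application-side approach in Perl with DBI (the jobs table, its columns, the queries and the dbname=test DSN are all made-up placeholders, not anything from your schema):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use POSIX qw(_exit);

    # normal application work, committed as usual
    my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '',
                           { AutoCommit => 1, RaiseError => 1 });
    $dbh->do(q{INSERT INTO jobs (payload) VALUES (?)}, undef, 'some work');

    # hand the slow follow-up to a child process
    my $pid = fork();
    if (defined $pid && $pid == 0) {
        # child: leave the parent's socket alone, reestablish a connection,
        # run the would-be trigger code, and leave without cleanup
        $dbh->{InactiveDestroy} = 1;
        my $child = DBI->connect('dbi:Pg:dbname=test', '', '',
                                 { AutoCommit => 1, RaiseError => 0 });
        $child->do(q{UPDATE jobs SET processed = true WHERE payload = ?},
                   undef, 'some work') if $child;
        _exit(0);
    }
    # parent: gracefully return and carry on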
> From the perspective of my earlier applications, it would be desirable to have an argument for such a COMMIT - a label, which is a *global* database object (maybe just a semaphore), and:
> 1) Assuming a semaphore implementation - the semaphore should be tested and fail if already asserted, NOT tested and block. The FORK should be initiated only if the semaphore test succeeds.
> 2) The execution of the procedure within fork-2 (meaning, after the semaphore assertion succeeds) should be postponed until the caller actually COMMITs.
> 3) On EXIT, fork-2 deasserts the semaphore.
> 4) In the simplest case, the semaphore could be defined on the trigger function name itself, so the 'label' for the COMMIT wouldn't be necessary?
Hmm, I've got a feeling it's something like "I don't feel like coding it in the application, so it would be better if the community changed the backend to do it". :) However, what you propose in points 1, 2, 3 and 4 is somewhat similar to the already existing 2PC (two-phase commit), which PostgreSQL implements. Probably not what you want, but valuable to know, I guess.

And as for COMMIT; within a function... Not possible, not without breaking an awful lot of things. Think about a transaction as:

    BEGIN;
    -- commands, like INSERTs, ended with
    COMMIT;   -- issued by the user
    -- user-given triggers fired after the user issued COMMIT;
    -- Referential Integrity triggers and whatnot
    COMMIT;   -- actual commit performed by the DB

If your trigger called COMMIT, the referential integrity triggers would not have anything left to do -- the commit would already be done. No referential integrity; you might as well use MyISAM then. ;)

So... let's assume the "commit" would not actually commit, but rather start another backend and do the work there [1]. The problem is that the newly started backend would not see the work until the old backend actually COMMITs.

The idea of a commit within a procedure might be interesting from the perspective of very-long-running queries which update a whole lot of rows, but that's another issue.

Regards,
   Dawid

[1]: If you really insist on doing it this way, of course you may! Here is a fishing rod: write a trigger in PL/PerlU which will fork(); the newly started child will use DBI to connect to the database, issue your query, and then call exit(0) to be sure you don't return to the backend. You might want to call exec() with a pre-prepared script doing the above work.
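A minimal sketch of that fishing rod, assuming a hypothetical jobs table with an id column and a local database named test (and remembering that the child will not see rows the parent backend has not committed yet):

    CREATE OR REPLACE FUNCTION jobs_fork() RETURNS trigger AS $$
        require POSIX;
        my $pid = fork();
        if (defined $pid && $pid == 0) {
            # child: open a fresh connection; it will NOT see rows the
            # parent backend has not committed yet
            require DBI;
            my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '',
                                   { AutoCommit => 1, RaiseError => 0 });
            $dbh->do(q{UPDATE jobs SET processed = true WHERE id = ?},
                     undef, $_TD->{new}{id}) if $dbh;
            # exec('/path/to/script.pl') would be the tidier variant
            POSIX::_exit(0);   # never fall back into the parent backend
        }
        return;                # parent backend carries on as usual
    $$ LANGUAGE plperlu;

    CREATE TRIGGER jobs_fork_trg AFTER INSERT ON jobs
        FOR EACH ROW EXECUTE PROCEDURE jobs_fork();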