On 3/17/07, louis gonzales <gonzales@xxxxxxxxxxxxxx> wrote:
> Statement-level triggers follow simple visibility rules: none of the changes made by a statement are visible to statement-level triggers that are invoked before the statement, whereas all modifications are visible to statement-level after triggers.
you are misunderstanding the manual. by "changes visible" it means the content of the table - not a list of which tuples were inserted/modified.
> The data change (insertion, update, or deletion) causing the trigger to fire is naturally not visible to SQL commands executed in a row-level before trigger, because it hasn't happened yet.
as i mentioned - you are not reading the text correctly. in a row-level trigger you have access to the changed data as the NEW.* and OLD.* records, and this works in both "before" and "after" triggers.
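for example (a sketch - table, column, and function names here are made up), a row-level trigger gets the per-row data through NEW and OLD:

```sql
-- hypothetical example: report every change to "accounts" from a row-level trigger
CREATE OR REPLACE FUNCTION log_account_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        RAISE NOTICE 'inserting id=%', NEW.id;
    ELSIF TG_OP = 'UPDATE' THEN
        RAISE NOTICE 'updating id=% (balance % -> %)', OLD.id, OLD.balance, NEW.balance;
    ELSIF TG_OP = 'DELETE' THEN
        RAISE NOTICE 'deleting id=%', OLD.id;
        RETURN OLD;  -- a BEFORE DELETE trigger must return OLD (NEW is NULL here)
    END IF;
    RETURN NEW;      -- for BEFORE INSERT/UPDATE; the value is ignored in AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_log
    BEFORE INSERT OR UPDATE OR DELETE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE log_account_change();
```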
> However, SQL commands executed in a row-level before trigger will see the effects of data changes for rows previously processed in the same outer command. This requires caution, since the ordering of these change events is not in general predictable; a SQL command that affects multiple rows may visit the rows in any order.
a statement-level after trigger sees the changes *if* it does a select * from the table - but that doesn't mean you have the ability to tell which records were added/modified, because you can't know.
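a sketch of what that means in practice (names are hypothetical): a statement-level trigger has no NEW/OLD row at all - the best it can do is inspect the table as a whole:

```sql
-- hypothetical: a statement-level AFTER trigger only sees the table's
-- current contents; there is no NEW/OLD row telling it which rows the
-- firing statement actually touched.
CREATE OR REPLACE FUNCTION count_accounts() RETURNS trigger AS $$
DECLARE
    n bigint;
BEGIN
    SELECT count(*) INTO n FROM accounts;
    RAISE NOTICE 'accounts now contains % rows', n;
    RETURN NULL;  -- the return value of a statement-level trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_count
    AFTER INSERT OR UPDATE OR DELETE ON accounts
    FOR EACH STATEMENT EXECUTE PROCEDURE count_accounts();
```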
> When a row-level after trigger is fired, all data changes made by the outer command are already complete, and are visible to the invoked trigger function.
read the comment above.
> 2) Seeing as you have no idea - not attacking, stating fact - on the rationale behind the "insert statement-level" to create 1-to-1 table for each statement-level
seeing as you have problems understanding simple english text - not attacking, stating fact - please check the docs, and do some tests yourself, before you state this kind of "facts".
> insert, I'd say your presumption is unfounded. If you have some benchmark data, which support why/how to quantify, 50K records in a single table, all of which would have N number of associated records in another table, would out perform 50K records in a single table referencing dedicated 'small' tables, please do share.
no, i don't have such benchmarks, as i think it is obvious that having 50k tables will kill any kind of performance and/or simplicity of writing queries. if you don't believe me - fine, your database, your problems. just test the "visibility" thing in triggers (especially in the case of multi-row inserts and updates).

best regards,
depesz
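for a quick self-contained test of that visibility behavior (table and function names are made up), insert several rows in one statement and watch what each row-level BEFORE call can already see:

```sql
-- visibility test: each row-level BEFORE call should see the rows
-- already processed by the same multi-row INSERT.
CREATE TABLE t (i int);

CREATE OR REPLACE FUNCTION show_visibility() RETURNS trigger AS $$
DECLARE
    n bigint;
BEGIN
    SELECT count(*) INTO n FROM t;
    RAISE NOTICE 'before inserting %, t already contains % rows', NEW.i, n;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_vis BEFORE INSERT ON t
    FOR EACH ROW EXECUTE PROCEDURE show_visibility();

INSERT INTO t SELECT generate_series(1, 3);
-- per the docs quoted above, the notices should report counts 0, 1, 2:
-- rows from earlier in the same statement are already visible.
```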