Mark Zellers wrote:
> One of the issues I have run into in migrating from Oracle to PostgreSQL is the difference in temporary table behavior.
> I've come up with what I think is a cunning plan, but I wonder if anyone with more PostgreSQL understanding can shoot holes in it.
>
> Oracle defines their temporary tables statically, such that the first use in a session instantiates a private instance of the table.
> That implies that you need to jump through flaming hoops in order to be able to use temporary tables in stored procedures and functions.
>
> My idea was to create a "prototype" table (which will never have any rows in it) and, when I need to use the temporary table,
> create the temporary table such that it hides the prototype. I have a simple example that seems to work,
> although before I invest more effort in using it for real work, I'd like to get a second opinion as to whether this
> approach is going to fail in a more complex scenario.

[...]

> What I am not clear on is what the rules are as to when a function/procedure is effectively recompiled.
> Is there a danger that, assuming the temporary table is created per session, one session might see another session's data due
> to the procedure having effectively compiled the temporary table into its definition?

There is no danger of that: a function is parsed whenever it is executed, and different database sessions never share any "compiled version" of a function or its query execution plans.

What does get cached are the execution plans of SQL statements in a PL/pgSQL function, but only across different calls in the same database session. That should not cause problems in your case (the plans are probably invalidated; I am not certain), but you can avoid the behavior entirely by using dynamic SQL (EXECUTE 'SELECT ...').
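To illustrate the dynamic-SQL escape hatch, here is a minimal sketch (the table and function names are made up for illustration): the function creates its session-private temporary table on demand and reads it through EXECUTE, so no cached plan in the PL/pgSQL function can refer to a stale incarnation of the table.

```sql
CREATE OR REPLACE FUNCTION count_scratch() RETURNS bigint
LANGUAGE plpgsql AS
$$
DECLARE
    result bigint;
BEGIN
    -- create the temporary table only if this session doesn't have it yet
    CREATE TEMPORARY TABLE IF NOT EXISTS scratch (
        id      bigint,
        payload text
    );

    -- dynamic SQL is parsed and planned on every execution,
    -- so nothing about "scratch" gets baked into a cached plan
    EXECUTE 'SELECT count(*) FROM scratch' INTO result;
    RETURN result;
END;
$$;
```

With static SQL in the function body you would normally be fine too, since plan invalidation handles dropped and recreated tables, but EXECUTE removes any doubt at the cost of replanning on each call.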
> While this approach does have the disadvantage of requiring the application to define the temporary table before using it (which
> could be as simple as using `CREATE TABLE AS SELECT * FROM prototype_table`), it seems simpler and potentially more performant than
> the approach I found here: https://www.codeproject.com/Articles/1176045/Oracle-style-global-temporary-tables-for-PostgreSQ
>
> It is also in direct opposition to this post I found: https://www.cybertec-postgresql.com/en/using-temporary-tables-the-way-they-should-not-be-used/

In this case, the "trouble and pain" behavior described there is exactly what you want, so don't worry.

> So far, I have not found a case where, as long as I don't read or write to the permanent table, I get the wrong results from
> the above approach. It allows me to minimize the impact on my application (basically, it means that at the start of any transaction
> that might need a certain temporary table, I need to manually create it). The number of places where I would need to do that is relatively small,
> so I'm willing to take that hit in exchange for not having to use dynamic SQL to refer to my temporary tables.

The biggest problem I see with your approach is that dropping temporary tables causes DELETEs in the system catalogs "pg_class" and "pg_attribute", and if you do that very often, the latter table in particular tends to get bloated. Either don't drop temporary tables all the time, or make sure that "pg_attribute" gets vacuumed often enough.

Other than that, I don't really see the need for keeping a permanent "blueprint" table around; all that can lead to is confusion. True, it is cute to use CREATE TABLE ... (LIKE ...), but a few more lines in your code won't kill you, particularly since you have that in your code only once, right?

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
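One way to avoid the catalog churn entirely, sketched here with a made-up table name: create the temporary table once per session with ON COMMIT DELETE ROWS, so its contents are cleared automatically at each transaction end while the table itself (and its pg_class/pg_attribute entries) survives for the whole session.

```sql
-- Run once per session (IF NOT EXISTS makes it safe to repeat):
CREATE TEMPORARY TABLE IF NOT EXISTS session_scratch (
    id      bigint,
    payload text
) ON COMMIT DELETE ROWS;

-- Use it normally inside a transaction; the rows vanish at COMMIT,
-- but no DROP happens, so the system catalogs see no DELETE traffic.
BEGIN;
INSERT INTO session_scratch VALUES (1, 'example');
COMMIT;
```

If you prefer explicit cleanup, TRUNCATE is also cheap and likewise avoids the per-transaction DROP/CREATE cycle that bloats "pg_attribute".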