On Mon, 2023-09-04 at 18:42 +0200, Lorusso Domenico wrote:
> I prepared a function that builds a set of records of type (myComplexType).
> That works, but I have to use this record set in different situations.
> One of them requires scanning the function results multiple times:
> 1. comparing with another table (myProducteAttributeTable) to determine whether I need to delete old records
> 2. checking whether the record set contains attributes never registered before
> 3. in that case, inserting into the appropriate table (myAttribute) and retrieving the attribute uid and type
> 4. finally, updating the first table (myProducteAttributeTable) with the new information coming from the record set (with a MERGE)
>
> So, which is the best approach?
>
> * Store the result in an array and use unnest in each subsequent query (with a complex record type?)
> * Create a temporary table with ON COMMIT DELETE ROWS, with the same structure? But in that case I can't declare a
>   cursor for the table, and I can't use UPDATE ... WHERE CURRENT OF (for point 3)
> * Create a normal table and truncate it as part of execution, but what happens in case of parallel execution?

I'd say it depends.

If the set is rather smallish, arrays may be the best way.

If the set is large (or the rows themselves are large), and the function is
not called all the time, I'd probably go with a temporary table.

If the set is large and the function is called all the time, I'd try to use a
persistent table to avoid catalog table bloat.  If several function invocations
need to store their record sets in the same table, you could make the backend
process ID part of the primary key.

Yours,
Laurenz Albe
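A minimal sketch of the array approach, under illustrative assumptions: the
composite type `my_complex_type` and its columns stand in for the poster's
`myComplexType` and are hypothetical:

```sql
-- Hypothetical composite type standing in for myComplexType:
CREATE TYPE my_complex_type AS (attr_uid bigint, attr_value text);

-- The function can hand back its record set as a single array value ...
SELECT ARRAY[ROW(1, 'color'), ROW(2, 'size')]::my_complex_type[];

-- ... and each subsequent query expands it back into rows with unnest(),
-- so the set can be scanned as many times as needed without re-running
-- the function:
SELECT r.attr_uid, r.attr_value
FROM unnest(ARRAY[ROW(1, 'color'), ROW(2, 'size')]::my_complex_type[]) AS r;
```

Because unnest() of a composite-type array expands the composite into its
columns, the array can be joined against other tables like an ordinary
row source.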
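The last suggestion (a persistent table with the backend process ID in the
primary key) could be sketched roughly as follows; the table `staging_attr`
and its columns are illustrative names, not anything from the original
schema:

```sql
-- Hypothetical persistent staging table; pg_backend_pid() in the key
-- keeps parallel invocations from different sessions apart.
CREATE TABLE staging_attr (
    pid        integer NOT NULL DEFAULT pg_backend_pid(),
    attr_uid   bigint  NOT NULL,
    attr_value text,
    PRIMARY KEY (pid, attr_uid)
);

-- Each invocation inserts its own record set; the pid column fills in
-- automatically from the default:
INSERT INTO staging_attr (attr_uid, attr_value)
VALUES (1, 'color'), (2, 'size');

-- All later scans (comparison, merge, etc.) filter on the caller's PID:
SELECT attr_uid, attr_value
FROM staging_attr
WHERE pid = pg_backend_pid();

-- At the end of the function, clean up only this backend's rows:
DELETE FROM staging_attr
WHERE pid = pg_backend_pid();
```

Unlike a temporary table, this avoids creating and dropping catalog entries
on every call, at the cost of having to delete the session's rows explicitly.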