On Wed, Dec 14, 2011 at 9:40 AM, Jon Nelson <jnelson+pgsql@xxxxxxxxxxx> wrote:
> On Wed, Dec 14, 2011 at 9:25 AM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
>> Jon Nelson <jnelson+pgsql@xxxxxxxxxxx> writes:
>>> Regarding caching, I tried caching it across calls by making the
>>> TupleDesc static and only initializing it once.
>>> When I tried that, I got:
>>
>>> ERROR:  number of columns (6769856) exceeds limit (1664)
>>
>>> I tried to find some documentation or examples that cache the
>>> information, but couldn't find any.
>>
>> You might find reading record_in to be helpful.  What it caches is not
>> exactly what you need to, I think, but it shows the general principles.
>> There are lots of other functions that use fn_extra to cache info, too.
>
> I will definitely look into those. I'm probably doing it wrong, but in
> the meantime, I allocated enough space (by way of MemoryContextAlloc)
> in TopMemoryContext for an AttInMetadata pointer, switched to that
> memory context (just the first time through), used CreateTupleDescCopy
> + TupleDescGetAttInMetadata to duplicate (in the new memory context)
> the TupleDesc, and then switched back. This approach seems to have
> dropped the total run time to about 54 seconds, the bulk of which is
> BuildTupleFromCStrings, a rather significant improvement.
>
> ....
>
> Looking at record_in, I think I see what I could be doing better.

Indeed. I revised the code to make use of fcinfo->flinfo->fn_extra for
storage and fcinfo->flinfo->fn_mcxt for the MemoryContext, and everything
seemed to work just fine.

Assuming one *starts* with a char *some_var[8], would building the Datums
myself be faster than using BuildTupleFromCStrings?

--
Jon
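For reference, a minimal sketch of the fn_extra/fn_mcxt caching pattern
described above. The function name and the two-column result shape are
hypothetical, and the function is assumed to be declared in SQL as
returning a composite type (or RECORD with OUT parameters) so that
get_call_result_type() can supply the TupleDesc:

/*
 * Sketch: cache AttInMetadata in fn_extra so the TupleDesc lookup and
 * setup work happen only on the first call, not on every call.
 */
#include "postgres.h"
#include "fmgr.h"
#include "funcapi.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_composite_func);

Datum
my_composite_func(PG_FUNCTION_ARGS)
{
    AttInMetadata *attinmeta = (AttInMetadata *) fcinfo->flinfo->fn_extra;
    HeapTuple   tuple;
    char       *values[2];      /* hypothetical two-column result */

    if (attinmeta == NULL)
    {
        /*
         * First call: build the metadata in fn_mcxt, which lives as long
         * as the FmgrInfo itself, so the cache survives across calls.
         */
        MemoryContext oldcxt;
        TupleDesc   tupdesc;

        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
            elog(ERROR, "return type must be a row type");

        oldcxt = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);
        /* copy the descriptor into fn_mcxt, then derive the input metadata */
        attinmeta = TupleDescGetAttInMetadata(CreateTupleDescCopy(tupdesc));
        MemoryContextSwitchTo(oldcxt);

        fcinfo->flinfo->fn_extra = (void *) attinmeta;
    }

    /* per-call work: fill in the column values as C strings */
    values[0] = "42";
    values[1] = "hello";

    tuple = BuildTupleFromCStrings(attinmeta, values);

    PG_RETURN_DATUM(HeapTupleGetDatum(tuple));
}

As to the last question: BuildTupleFromCStrings is essentially a
per-column loop of input-function calls followed by heap_form_tuple(),
so if you already had the values as Datums in their native binary form,
calling heap_form_tuple() directly should save the text-to-binary
conversion. Starting from an array of char *, though, the input
functions have to run either way, so the gain would likely be small.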