I would like to store very large data in a single column. The data contains my own data structure and memory layout, and it will be more than 500M bytes. I wrote something like:

    Datum
    myfunc(PG_FUNCTION_ARGS)
    {
        /* here get my data, allocated ~500M */
        ...

        /* allocate more than 500M again for the result */
        bytea  *result = (bytea *) palloc(size);

        VARATT_SIZEP(result) = size;
        /* payload length is the total size minus the varlena header */
        memcpy(VARDATA(result), my_data, size - VARHDRSZ);
        pfree(my_data);
        PG_RETURN_BYTEA_P(result);
    }

This code succeeded on Linux but failed on Windows with a memory allocation error; I guess it hit the per-process memory limit there.

In any case, I think this approach is neither smart nor efficient, because as I understand it the backend copies the tuple returned by the user function:

    original 500M -> bytea palloc 500M -> backend copy 500M -> tuple on relation

Since this is such a rare case, I would like your opinions on what you would do here. Write the data out to a file I control myself? Is that allowed in PG? Or write it into a heap relation directly from the user function? But how? (Rough sketches of both ideas are below.) Note that only one tuple can be used: splitting the data into several tuples or datums is not possible, because the data chunk is a locally specific structure with its own memory layout, as described at the start.

Regards,
Hitoshi Harada
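For the file idea, this is roughly what I have in mind. It is a minimal sketch only: the path is made up, the data-building step is elided as in the function above, error handling is mostly omitted, and it assumes plain stdio is acceptable inside a backend. The tuple then stores only a short path instead of the 500M blob:

    #include "postgres.h"
    #include "fmgr.h"

    #include <stdio.h>
    #include <string.h>

    PG_FUNCTION_INFO_V1(myfunc_to_file);

    /* Write the chunk to a file we control and return its path as text,
     * so the tuple stores only a short reference instead of 500M. */
    Datum
    myfunc_to_file(PG_FUNCTION_ARGS)
    {
        const char *path = "/var/lib/mydata/blob.bin";  /* made-up location */
        size_t      pathlen = strlen(path);
        char       *my_data = NULL;
        size_t      size = 0;
        FILE       *fp;
        text       *result;

        /* ... build my_data and size as in the original function ... */

        fp = fopen(path, "wb");
        if (fp == NULL)
            ereport(ERROR, (errmsg("could not open \"%s\"", path)));
        if (fwrite(my_data, 1, size, fp) != size)
            ereport(ERROR, (errmsg("could not write \"%s\"", path)));
        fclose(fp);
        pfree(my_data);

        result = (text *) palloc(VARHDRSZ + pathlen);
        VARATT_SIZEP(result) = VARHDRSZ + pathlen;
        memcpy(VARDATA(result), path, pathlen);
        PG_RETURN_TEXT_P(result);
    }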
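For the heap relation idea, the only way I can see to drive an INSERT from inside a C function is SPI. Another minimal sketch, assuming a server new enough to have SPI_execute_with_args; the table "my_blobs" with a single bytea column is made up, and return codes are not checked. Note that this still materializes the whole datum in backend memory, so it does not by itself avoid the copies described above:

    #include "postgres.h"
    #include "executor/spi.h"
    #include "catalog/pg_type.h"

    /* Insert an already-built bytea into a table via SPI. */
    static void
    store_blob(bytea *blob)
    {
        Oid    argtypes[1] = { BYTEAOID };
        Datum  values[1];

        values[0] = PointerGetDatum(blob);

        SPI_connect();
        SPI_execute_with_args("INSERT INTO my_blobs(data) VALUES ($1)",
                              1, argtypes, values, NULL,
                              false, 0);
        SPI_finish();
    }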