Kynn Jones wrote:
I have a C program that reads a large binary file, and uses the read
information plus some user-supplied arguments to generate an in-memory
data structure that is used during the remainder of the program's
execution. I would like to adapt this code so that it gets the
original binary data from a Pg database rather than a file.
One very nice feature of the original scheme is that the reading of
the original file was done piecemeal, so that the full content of the
file (which is about 0.2GB) was never in memory all at once, which
kept the program's memory footprint nice and small.
Is there any way to replicate this small memory footprint if the
program reads the binary data from a Pg DB instead of from a file?
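One way to keep the footprint small on the database side is libpq's large object
interface: lo_open() and lo_read() let the program pull the blob down in chunks,
much like read() on a file descriptor. Here is a minimal sketch, assuming the blob
has already been loaded as a single large object (e.g. with lo_import) and that its
OID is passed on the command line; both of those details are assumptions on my part,
not something from your post:

/*
 * Sketch: piecemeal read of a PostgreSQL large object.
 * The OID argument and the chunk size are illustrative assumptions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>     /* for INV_READ */

#define CHUNK_SIZE 65536

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <large-object-oid>\n", argv[0]);
        return 1;
    }
    Oid lobj_oid = (Oid) strtoul(argv[1], NULL, 10);

    PGconn *conn = PQconnectdb("");     /* connection parameters from environment */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Large object descriptors are only valid inside a transaction. */
    PQclear(PQexec(conn, "BEGIN"));

    int fd = lo_open(conn, lobj_oid, INV_READ);
    if (fd < 0) {
        fprintf(stderr, "lo_open failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    char buf[CHUNK_SIZE];
    int nread;
    while ((nread = lo_read(conn, fd, buf, sizeof buf)) > 0) {
        /* Feed each chunk to the existing parser here, just as the
         * file-based version did with successive read()/fread() calls. */
    }

    lo_close(conn, fd);
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}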
That said, is this binary data in any way record- or table-structured, such that it
could be stored as multiple rows and perhaps fields? If not, why would you want to
put a 200MB blob of amorphous data into a relational database?
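If it can be split, a common pattern is to store the data as ordered bytea chunks
and fetch them one row at a time through a cursor, so only one chunk is ever resident.
A sketch, using a hypothetical table blob_chunks(seq int, data bytea) that is not
from the original post:

/*
 * Sketch: row-at-a-time read of a blob stored as ordered bytea chunks.
 * Assumed schema: CREATE TABLE blob_chunks (seq int PRIMARY KEY, data bytea);
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");     /* connection parameters from environment */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn,
        "DECLARE chunks NO SCROLL CURSOR FOR "
        "SELECT data FROM blob_chunks ORDER BY seq"));

    for (;;) {
        /* resultFormat = 1 asks for binary results, so the bytea arrives raw. */
        PGresult *res = PQexecParams(conn, "FETCH 1 FROM chunks",
                                     0, NULL, NULL, NULL, NULL, 1);
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0) {
            PQclear(res);
            break;
        }
        const char *chunk = PQgetvalue(res, 0, 0);
        int len = PQgetlength(res, 0, 0);
        /* Hand (chunk, len) to the existing parser here. */
        (void) chunk; (void) len;
        PQclear(res);
    }

    PQclear(PQexec(conn, "CLOSE chunks"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

Either way only one chunk is in memory at a time, so the footprint stays roughly
where it was with the file-based reader.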