On 06/13/2011 12:00 AM, Hanno Schlichting wrote:
> But from what I read of Postgres, my best bet is to store data as
> large objects [2]. Taken all the way down, this means storing the
> binary data as 2kB chunks and adding table-row overhead for each of
> those chunks. Using the bytea type and the TOAST backend [3], it
> seems to come down to the same thing: data is actually stored in
> ~2kB chunks given an 8kB page size.
This is probably much less of a concern than you expect. Consider that your file system almost certainly stores file data in chunks of between 512 bytes and 4kB (the block size) and performs just fine.
Given the file sizes you're working with, I'd try using `bytea' and see how you go. Put together a test or simulation that you can use to evaluate performance if you're concerned.
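Something along these lines is enough to get started; the table and file names here are just placeholders I made up for illustration:

    -- a simple bytea-based file store; names are placeholders
    CREATE TABLE file_store (
        id       serial PRIMARY KEY,
        filename text   NOT NULL,
        data     bytea  NOT NULL
    );

    -- from psql; in a real app you'd bind the file contents as a
    -- binary parameter through your client driver instead
    INSERT INTO file_store (filename, data)
    VALUES ('hello.txt', decode('68656c6c6f0a', 'hex'));

    SELECT filename, octet_length(data) AS bytes FROM file_store;

TOAST compresses and chunks the values transparently; you never have to deal with the ~2kB pieces yourself.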
Maybe one day Linux systems will have a file system capable of transactional behaviour, as NTFS is, so Pg could integrate with the file system for transactional file management. In the meantime, `bytea' or `lo' seems to be your best bet.
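If you do end up going the `lo' route, the basics look something like this. The paths and names are invented for the example, and note that the server-side lo_import/lo_export functions require superuser; client drivers offer client-side equivalents:

    -- track large objects by oid; table/column names are placeholders
    CREATE TABLE docs (
        id      serial PRIMARY KEY,
        name    text NOT NULL,
        content oid  NOT NULL
    );

    BEGIN;
    INSERT INTO docs (name, content)
    VALUES ('report.pdf', lo_import('/tmp/report.pdf'));  -- server-side, superuser only
    COMMIT;

    -- write a copy back out on the server
    SELECT lo_export(content, '/tmp/report_copy.pdf')
    FROM docs WHERE name = 'report.pdf';

One gotcha: large objects aren't removed when the row referencing them is deleted, so you have to lo_unlink() them yourself or clean up orphans with the contrib vacuumlo tool.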
--
Craig Ringer