On Wed, 4 Mar 2015 12:36:36 -0300
María Griensu <mdovale.griensu@xxxxxxxxx> wrote:

> Thanks, I mean, how heavy it is in kB.

If you mean how much overhead is involved in storing the data, that's a bit
complicated.

First off, how the data is stored depends on its size, and on what other
fields there are in the table and their sizes as well. This is because the
TOAST code decides on the fly how to store things, based on various rules.
Read up:

http://www.postgresql.org/docs/9.4/static/storage-toast.html

There are a number of factors. If the data is large enough for the TOAST code
to kick in, the server may decide to compress it. Whether that actually helps
depends on the nature of the data: a zip file or a PNG isn't going to get any
smaller, for example.

From there, if the data is large enough to trigger out-of-line storage, it is
broken into chunks and stored in a TOAST table. That increases the overhead,
because each row in the TOAST table carries its own per-row overhead, and the
number of rows required in the TOAST table depends on the size of the data,
which in turn depends on how well the data compressed.

So, your answer is: take _your_ data, store a bunch of it, then check the
resultant size of the tables on disk against the actual size of the data.
That's really the only way to know, since the actual efficiency of data
storage depends a lot on the data itself.

-- 
Bill Moran
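
P.S. A minimal sketch of that "measure it yourself" approach, assuming a
hypothetical table "docs" with a bytea column "payload" (the names are made
up; substitute your own schema and load a representative sample of your real
data first):

    -- Hypothetical table; replace with your real schema.
    CREATE TABLE docs (id serial PRIMARY KEY, payload bytea);

    -- Compare the on-disk footprint with the raw size of the stored values.
    -- pg_total_relation_size includes heap, TOAST and indexes;
    -- pg_table_size includes heap and TOAST but not indexes.
    SELECT pg_size_pretty(pg_total_relation_size('docs')) AS total_on_disk,
           pg_size_pretty(pg_table_size('docs'))          AS heap_plus_toast,
           pg_size_pretty(sum(octet_length(payload)))     AS raw_data_size
      FROM docs;

The gap between total_on_disk and raw_data_size is the overhead in question;
it will vary with how compressible the data is and how many out-of-line
chunks TOAST ends up creating.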