On Tue, Feb 4, 2014 at 2:59 PM, Rob Sargent <robjsargent@xxxxxxxxx> wrote:
> On 02/04/2014 01:52 PM, AlexK wrote:
> > Every row of my table has a double[] array of approximately 30K numbers.
> > I have run a few tests, and so far everything looks good.
> >
> > I am not pushing the limits here, right? It should be perfectly fine to
> > store arrays of 30K double numbers, correct?
>
> What sorts of tests and what sorts of results?
>
> Each record has something like 30000*16 + 30000*(per-cell overhead, which
> could be zero), but that is definitely spilling over to TOAST. Have you
> done any large-scale deletes?

My take: it depends on your definition of "fine". Your single datum will be
pushing hundreds of kilobytes, all of which has to be read or written in its
entirety if you want to touch any single element. This works out well if your
application always reads and writes the entire array as a block (so that it
behaves as a single complete structure), and poorly for any other use case.
In particular, if you tend to update random elements in the array one by one,
this approach will tend to fall over.

Also, that size estimate is on the pessimistic side: it's 30000 * 8 (the size
of float8), or roughly 240 kB per row before compression. Any solution
storing arrays is going to be much more compact than one value per row, since
you amortize the MVCC tracking overhead across all the elements
(notwithstanding the flexibility you give up to do that). Point being:
however many blocks the array takes up toasted, it will take up a lot more as
standard one-value-per-row records.

merlin
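
To make the arithmetic concrete, here is a minimal sketch of the kind of
schema being discussed. The table and column names are invented for
illustration; pg_column_size() and the single-element UPDATE are only there
to demonstrate the points made above, not anything from the original thread.

    -- Hypothetical table: one float8[] of ~30000 elements per row.
    CREATE TABLE measurements (
        id      serial PRIMARY KEY,
        samples float8[]   -- ~30000 * 8 bytes = ~240 kB, stored via TOAST
    );

    -- One row with 30000 random doubles.
    INSERT INTO measurements (samples)
    SELECT array_agg(random())
    FROM generate_series(1, 30000);

    -- pg_column_size() reports the stored (possibly compressed) size.
    SELECT pg_column_size(samples) AS stored_bytes,
           array_length(samples, 1) AS elements
    FROM measurements;

    -- Changing a single element still rewrites the whole toasted datum.
    UPDATE measurements SET samples[15000] = 0 WHERE id = 1;

That last UPDATE is the access pattern being warned about: even a one-element
change detoasts, modifies, and rewrites the entire ~240 kB array value.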