On Fri, Nov 28, 2008 at 3:48 PM, Alvaro Herrera <alvherre@xxxxxxxxxxxxxxxxx> wrote:
> William Temperley wrote:
>> So a 216 billion row table is probably out of the question. I was
>> considering storing the 500 floats as bytea.
>
> What about a float array, float[]?

I guess that would be the obvious choice... just a lot of storage
space required, I imagine.

On Fri, Nov 28, 2008 at 4:03 PM, Grzegorz Jaśkiewicz <gryzman@xxxxxxxxx> wrote:
>
> you seriously don't want to use bytea to store anything, especially if the
> datatype matching exists in db of choice.
> also, consider partitioning it :)
>
> Try to follow rules of normalization, as with that sort of data - less
> storage space used, the better :)

Any more normalized and I'd have 216 billion rows! Add an index and
I'd have - well, a far bigger table than 432 million rows each
containing a float array - I think?

Really I'm worried about reducing storage space and network overhead
- therefore a nicely compressed chunk of binary would be perfect for
the 500 values - wouldn't it?

Will
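
For concreteness, below is a minimal SQL sketch of the two layouts being
weighed up here - an array column versus full normalization. All table,
column, and constraint names are hypothetical; the only figures carried
over from the thread are 432 million rows of 500 floats each. Note that
float8[] values of this size are stored out of line via TOAST, and
PostgreSQL will attempt to compress them.

-- Hypothetical array-column layout: one row per series, the 500 floats
-- kept together as a single float8[] value.
CREATE TABLE value_series (
    series_id bigint PRIMARY KEY,
    vals      float8[] NOT NULL CHECK (array_length(vals, 1) = 500)
);

-- Fully normalized alternative: one row per individual value, i.e.
-- 432 million series x 500 positions = 216 billion rows, plus a much
-- larger primary-key index.
CREATE TABLE value_point (
    series_id bigint   NOT NULL,
    pos       smallint NOT NULL CHECK (pos BETWEEN 1 AND 500),
    val       float8   NOT NULL,
    PRIMARY KEY (series_id, pos)
);

-- Fetching a single value from the array layout:
SELECT vals[250] FROM value_series WHERE series_id = 12345;

The array layout keeps the row count at 432 million and lets the 500
values travel to the client as one datum; the normalized layout trades
that for 216 billion rows and the extra per-row and index overhead that
come with them.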