Hi,
I have two tables
s {
  id bigint NOT NULL PRIMARY KEY,
  ...
}

sp {
  id bigint PRIMARY KEY,
  sid bigint REFERENCES s (id),
  i numeric,
  m numeric,
  ...
}
For each entry in [s] I have on average around 120 entries in [sp], and
that table has become the largest table in my database (8.81565 * 10^9
rows).
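For reference, the current footprint can be checked with the standard
size functions, something like:

-- On-disk size of the child table: heap, indexes, and total (incl. TOAST).
SELECT pg_size_pretty(pg_total_relation_size('sp')) AS total_size,
       pg_size_pretty(pg_relation_size('sp'))       AS heap_only,
       pg_size_pretty(pg_indexes_size('sp'))        AS index_size;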
Data in [sp] are never changed. I was wondering whether it would be more
efficient, primarily in terms of storage, to change the structure so
that [s] holds the values as two arrays instead, e.g.:
s {
  id bigint NOT NULL PRIMARY KEY,
  i numeric[],
  m numeric[],
  ...
}
I can probably also reduce the size by changing the datatypes from
numeric to real/double precision, so the final table would look like this:
s {
  id bigint NOT NULL PRIMARY KEY,
  i real[],
  m double precision[],
  ...
}
I haven't really found anything yet on how much space (e.g. how many
bytes) an array will use compared to the equivalent table rows in
PostgreSQL.
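The closest I can get is comparing individual values and arrays with
pg_column_size(), something like this (the test value and the
120-element arrays are arbitrary, just meant to mimic one [s] entry's
worth of data):

-- Bytes for a single value in each representation, and for a whole
-- 120-element array. Each separate [sp] row additionally carries a
-- ~24-byte tuple header plus the two bigint key columns, which the
-- array form avoids.
SELECT pg_column_size(1.234567::numeric)                   AS numeric_bytes,
       pg_column_size(1.234567::double precision)          AS float8_bytes,
       pg_column_size(ARRAY(SELECT random()::real
                            FROM generate_series(1, 120))) AS real_array_bytes,
       pg_column_size(ARRAY(SELECT random()
                            FROM generate_series(1, 120))) AS float8_array_bytes;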
Thanks
Lutz