Pavel Stehule wrote:
2007/10/4, Jorge Godoy <jgodoy@xxxxxxxxx>:
On Thursday 04 October 2007 06:20:19 Pavel Stehule wrote:
I'd use the same solution that he was going to use: a normalized table including a
timestamp (with TZ because of daylight saving time...), a column with a FK
to a series table, and the value itself. Index the first two columns (if
you're searching using the value as a parameter, then index it as well) and
this would be the basis of my design for this specific condition.
Having good statistics and tuning autovacuum will also help a lot with handling
new inserts and deletes.
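A minimal sketch of that layout, with illustrative table and column names (none of these identifiers come from the original posts):

CREATE TABLE series (
    series_id   serial PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE observation (
    series_id   integer NOT NULL REFERENCES series(series_id),
    ts          timestamptz NOT NULL,  -- timestamp WITH time zone, as suggested above
    value       double precision NOT NULL,
    PRIMARY KEY (series_id, ts)        -- covers the "index the first two columns" advice
);

-- Only needed if you also search by the value itself:
-- CREATE INDEX observation_value_idx ON observation (value);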
It depends on the workload. In some cases a normalised solution is better,
in some it is not. But I believe that if you have a lot of time series,
arrays are better. But I repeat, it depends on the task.
Pavel
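For contrast, a rough sketch of the array layout Pavel describes, again with illustrative names; the timestamps are implied by a start time and a fixed frequency rather than stored per value:

CREATE TABLE series_array (
    series_id   serial PRIMARY KEY,
    start_ts    timestamptz NOT NULL,        -- timestamp of the first element
    freq        interval NOT NULL,           -- spacing between elements
    vals        double precision[] NOT NULL  -- the whole series in one array
);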
Thanks for your input so far. Maybe I should add a few things about what
I will do with the data. There are only a few operations that will be
done in the database:
a) retrieving a slice or the whole series
b) changing the frequency of the series
c) grouping several series (with the same time frame/frequency) together in
a result set
d) calculating moving averages and other econometrics stuff :-) (see the sketch below)
I will always know which series I want (i.e. there will be no case where
I'm searching for a value within the series).
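As an illustration of (a) and (d) against the normalized layout sketched earlier, a query along these lines retrieves a slice of one series and computes a 30-observation moving average over it. The series id and the date range are placeholders, and the window function needs PostgreSQL 8.4 or later:

SELECT ts,
       value,
       avg(value) OVER (ORDER BY ts
                        ROWS BETWEEN 29 PRECEDING AND CURRENT ROW) AS ma_30
FROM   observation
WHERE  series_id = 42                -- placeholder series id
  AND  ts >= '2007-01-01'            -- slice boundaries are placeholders
  AND  ts <  '2007-07-01'
ORDER  BY ts;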
Two questions regarding the arrays: Do you know if these are really
dynamic (e.g. if I have two rows, one with an array with 12 values and
the other one with 1,000 values - will Postgres pad the shorter row?)
And is there a built-in function to retrieve arrays as rows? (I know
that you can build your own function for that, but I wonder whether
there is a faster native function.)
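On the second question, a sketch of the kind of expansion meant here, using the illustrative series_array table from above: PostgreSQL 8.4 and later ship unnest(), which turns an array into one row per element; on earlier releases a hand-written set-returning function is the usual route, as noted above.

-- Expand the array column of one (placeholder) series into rows,
-- one row per element, in array order.
SELECT unnest(vals) AS value
FROM   series_array
WHERE  series_id = 42;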
Thank you very much!
Andreas
---------------------------(end of broadcast)---------------------------
TIP 5: don't forget to increase your free space map settings