2007/10/4, Jorge Godoy <jgodoy@xxxxxxxxx>:
> On Thursday 04 October 2007 06:20:19 Pavel Stehule wrote:
> >
> > I have had good experience with the second variant.  PostgreSQL needs
> > 24 bytes of header for every row, so it isn't very efficient to store
> > one field per row.  And you can easily transform between an array and
> > a table now.
>
> But then you make all SQL operations more complex and you will have
> problems using aggregates, etc.  For example, with a normalized design
> one can query the average value of a specific series with simple
> commands, and given the use of indexes this can be highly optimized.
> Using an array, on the other hand, he'll be doing a seqscan on every
> row, because he needs to find whether there is a value for the given
> series, then extract those values, and finally calculate the average
> (I know you can select an element of the array, but it won't be easy
> on the planner or on the loop that calculates the average, because all
> of that has to be done on every row).
>
> I'd use the same solution he was already heading towards: a normalized
> table with a timestamp (with time zone, because of daylight saving
> time...), a column with a FK to a series table, and the value itself.
> Index the first two columns (if you're searching with the value as a
> parameter, index it as well), and that would be the basis of my design
> for this specific case.
>
> Having good statistics and tuning autovacuum will also help a lot with
> handling new inserts and deletes.

It depends on the workload.  In some cases the normalised solution is
better, in others it is not.  But I believe that if you have a lot of
time series, arrays are better.  Again, it depends on the task.

Pavel
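
A minimal sketch of the two layouts being compared, in PostgreSQL SQL; all
table, column, and index names below are made up for illustration, not taken
from the thread:

-- Normalized layout: one row per sample.
CREATE TABLE series (
    series_id  serial PRIMARY KEY,
    name       text NOT NULL UNIQUE
);

CREATE TABLE observation (
    observed_at  timestamp with time zone NOT NULL,
    series_id    integer NOT NULL REFERENCES series (series_id),
    value        double precision NOT NULL
);

-- Index the first two columns, as suggested above; index "value" only if
-- it is also used as a search parameter.
CREATE INDEX observation_observed_at_idx ON observation (observed_at);
CREATE INDEX observation_series_id_idx   ON observation (series_id);

-- The average of one series over a period is then a plain aggregate that
-- can use the indexes:
SELECT avg(value)
FROM   observation
WHERE  series_id   = 1
AND    observed_at >= '2007-01-01'
AND    observed_at <  '2007-02-01';

-- Array layout: one row per series with the samples packed into an array,
-- which avoids the per-row header overhead mentioned above.
CREATE TABLE series_packed (
    series_id  serial PRIMARY KEY,
    name       text NOT NULL,
    samples    double precision[] NOT NULL
);

-- Turning the array back into rows to compute the same average
-- (PostgreSQL 8.4 and later can use unnest() for this):
SELECT avg(sample)
FROM  (SELECT samples[generate_series(1, array_upper(samples, 1))] AS sample
       FROM   series_packed
       WHERE  series_id = 1) AS unpacked;

The normalized query can hit the index on series_id directly, while the
array query has to read and unpack each packed row, which is the planner
cost described above; in exchange, the array layout pays the ~24-byte row
header once per series rather than once per sample.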