On Sat, Aug 27, 2016 at 21:33 GMT+03:00, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> Partitioning the Feature and Point tables on measurement_time (or
> measurement_start_time,
> you are not consistent on what it is called) might be helpful. However,
> measurement_time does not exist in those tables, so you would first have to
> de-normalize by introducing it into them.
>
> More likely to be helpful would be precomputing the aggregates and storing
> them in a materialized view (not available in 9.2). Also, more RAM and
> better hard drives can't hurt.
Thanks a lot for the help and all the suggestions. Before this I tried partitioning by measurement_id (Feature table) and by result_feature_id (Point table), but performance was worse than without partitioning. Partitioning on measurement_time might be a better idea (measurement_start_time was meant to be measurement_time); a sketch of what I have in mind is below.
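
To make that concrete, here is a minimal sketch using 9.2-style inheritance partitioning (declarative partitioning does not exist in 9.2). The table definition is a simplified placeholder, not our real schema, and it assumes measurement_time has already been de-normalized into Feature as you described:

-- Simplified placeholder schema: the real Feature table has more
-- columns; measurement_time is assumed de-normalized into it.
CREATE TABLE feature (
    id               bigint PRIMARY KEY,
    measurement_id   bigint NOT NULL,
    measurement_time timestamp NOT NULL
);

-- One child table per month, with a CHECK constraint so the planner
-- can exclude it from queries on other time ranges.
CREATE TABLE feature_2016_08 (
    CHECK (measurement_time >= DATE '2016-08-01'
       AND measurement_time <  DATE '2016-09-01')
) INHERITS (feature);

CREATE INDEX ON feature_2016_08 (measurement_time);

-- Route inserts on the parent into the matching child
-- (one branch per partition).
CREATE OR REPLACE FUNCTION feature_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.measurement_time >= DATE '2016-08-01' AND
       NEW.measurement_time <  DATE '2016-09-01' THEN
        INSERT INTO feature_2016_08 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for measurement_time %',
                        NEW.measurement_time;
    END IF;
    RETURN NULL;  -- row goes to the child, not the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER feature_insert
BEFORE INSERT ON feature
FOR EACH ROW EXECUTE PROCEDURE feature_insert_trigger();

With the default constraint_exclusion = partition, queries that filter on measurement_time should then scan only the matching children; the Point table would get the same treatment. One caveat from the docs: children do not inherit the parent's indexes or primary key, so each child needs its own.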
I think I will upgrade to a newer version, use better hardware, and try materialized views for better performance, roughly as sketched below.
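
For the materialized view, something like this is what I understood (materialized views need 9.3+, and the non-blocking refresh at the end needs 9.4+; the join columns and aggregates are only placeholders, not our actual query):

-- Placeholder aggregate: the column names m.id, f.id and p.value are
-- assumptions standing in for our actual schema and query.
CREATE MATERIALIZED VIEW daily_point_stats AS
SELECT date_trunc('day', m.measurement_time) AS day,
       count(*)                              AS n_points,
       avg(p.value)                          AS avg_value
FROM   measurement m
JOIN   feature f ON f.measurement_id    = m.id
JOIN   point   p ON p.result_feature_id = f.id
GROUP  BY 1;

CREATE UNIQUE INDEX ON daily_point_stats (day);

-- Refresh on a schedule; on 9.4+, CONCURRENTLY (which requires the
-- unique index above) avoids blocking readers during the refresh.
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_point_stats;

Queries would then read from daily_point_stats instead of re-aggregating the raw Point rows every time, and the refresh could run from cron at whatever staleness we can tolerate.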
Best Regards,
Tommi Kaksonen
2016-08-27 21:33 GMT+03:00 Jeff Janes <jeff.janes@xxxxxxxxx>:

On Fri, Aug 26, 2016 at 6:17 AM, Tommi K <t2nn2t@xxxxxxxxx> wrote:
> Hello,
> thanks for the response. I did not get the response to my email even
> though I am subscribed to the pgsql-performance mailing list. Let's
> hope that I get the next one :)
>
> Increasing work_mem did not have a great impact on performance. But I
> will try to update the PostgreSQL version to see if it speeds things up.
>
> However, is there a way to keep query time constant as the database
> size grows?

Not likely. If the number of rows you are aggregating grows, it will
take more work to do those aggregations.

If the number of rows being aggregated doesn't grow, because all the
growth occurs outside of the measurement_time range, even then the new
data will still make it harder to keep the stuff you want cached in
memory. If you really want more-constant query time, you could approach
that by giving the machine as little RAM as possible. This works not by
making the large database case faster, but by making the small database
case slower. That usually is not what people want.

> Should I use partitioning or partial indexes?

Partitioning the Feature and Point tables on measurement_time (or
measurement_start_time, you are not consistent on what it is called)
might be helpful. However, measurement_time does not exist in those
tables, so you would first have to de-normalize by introducing it into
them.

More likely to be helpful would be precomputing the aggregates and
storing them in a materialized view (not available in 9.2). Also, more
RAM and better hard drives can't hurt.

Cheers,

Jeff
On Fri, Aug 26, 2016 at 6:17 AM, Tommi K <t2nn2t@xxxxxxxxx> wrote:Hello,thanks for the response. I did not get the response to my email even though I am subscribed to the pgsql-performance mail list. Let's hope that I get the next one :)Increasing work_mem did not have great impact on the performance. But I will try to update the PostgreSQL version to see if it speeds up things.However is there way to keep query time constant as the database size grows.Not likely. If the number of rows you are aggregating grows, it will take more work to do those aggregations.If the number of rows being aggregated doesn't grow, because all the growth occurs outside of the measurement_time range, even then the new data will still make it harder to keep the stuff you want cached in memory. If you really want more-constant query time, you could approach that by giving the machine as little RAM as possible. This works not by making the large database case faster, but by making the small database case slower. That usually is not what people want.Should I use partitioning or partial indexes?Partitioning the Feature and Point tables on measurement_time (or measurement_start_time, you are not consistent on what it is called) might be helpful. However, measurement_time does not exist in those tables, so you would first have to de-normalize by introducing it into them.More likely to be helpful would be precomputing the aggregates and storing them in a materialized view (not available in 9.2). Also, more RAM and better hard-drives can't hurt.Cheers,Jeff