We have now tried some recoding: using a materialized view, we could
reduce the query to a join over two tables without any functions inside
the query, for example:
EXPLAIN ANALYSE SELECT '12.11.2007 18:04:00 UTC' AS zeit,
       'M' AS ganglinientyp,
       zs_de,
       j_ges,
       j_lkw,
       v_pkw,
       v_lkw,
       p_bel
FROM messungen_v_dat_2007_11_12 m
LEFT JOIN messwerte_mv w ON w.nr = m.messpunkt
WHERE m.ganglinientyp = 'M'
  AND 992 = m.minute_tag;
 Nested Loop Left Join  (cost=0.00..32604.48 rows=3204 width=14) (actual time=11.991..2223.227 rows=2950 loops=1)
   ->  Index Scan using messungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on messungen_v_dat_2007_11_12 m  (cost=0.00..5371.09 rows=3204 width=4) (actual time=0.152..12.385 rows=2950 loops=1)
         Index Cond: ((ganglinientyp = 'M'::bpchar) AND (992 = minute_tag))
   ->  Index Scan using messwerte_mv_nr_idx on messwerte_mv w  (cost=0.00..8.49 rows=1 width=18) (actual time=0.730..0.734 rows=1 loops=2950)
         Index Cond: (w.nr = m.messpunkt)
 Total runtime: 2234.143 ms
(6 rows)
To me this plan looks very clean and nearly optimal, BUT ~2 seconds for
the nested loop can't be good, can it? The actual times do add up,
though: the inner index scan costs ~0.73 ms per lookup, and 2950 loops
of that account for almost all of the 2.2 s.
The behavior of this query and the database is quite a mystery to me:
yesterday I had it running in about 100 ms, today I started testing with
the same query and got 2000-3000 ms :(
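If the swing is just cold vs. warm caches (an assumption on my part, not
something I have verified), running the identical query twice in the same
psql session should show it; a minimal sketch, using count(*) only to
suppress the row output:

\timing on
-- first run: may have to read index and heap pages from disk
SELECT count(*)
FROM messungen_v_dat_2007_11_12 m
LEFT JOIN messwerte_mv w ON w.nr = m.messpunkt
WHERE m.ganglinientyp = 'M'
  AND 992 = m.minute_tag;
-- second, identical run: if this drops from ~2200 ms to ~100 ms, the
-- slowdown is buffer-cache warm-up rather than a bad plan
SELECT count(*)
FROM messungen_v_dat_2007_11_12 m
LEFT JOIN messwerte_mv w ON w.nr = m.messpunkt
WHERE m.ganglinientyp = 'M'
  AND 992 = m.minute_tag;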
Could this be some kind of PostgreSQL server/configuration problem?
These queries are very performance-dependent: they are called many
times in a complex physical real-time simulation of traffic systems.
200 ms would be OK here, but >1 s is probably not usable.
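For reference, these are the settings I would check first; what values
are right for this machine is an assumption I can't make from here, but
they are the knobs that matter most for an index-scan-heavy,
cache-dependent workload like this one:

-- memory PostgreSQL itself uses for caching table and index pages
SHOW shared_buffers;
-- the planner's estimate of combined PostgreSQL + OS cache size
SHOW effective_cache_size;
-- relative cost of a random page fetch; high values penalize exactly
-- the kind of repeated index lookups in the plan above
SHOW random_page_cost;
-- per-sort/hash memory; relevant if the planner should prefer a hash join
SHOW work_mem;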
The old version just used one big (partitioned) table without any joins
and performed this query in 10-300 ms, depending on the server load.
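If the remaining cost really is random I/O on messwerte_mv (again an
assumption), physically ordering that table by the join key might bring
back some of the locality the old single table had; a sketch, written in
the pre-8.3 form of CLUSTER in case this is an older server:

-- rewrite messwerte_mv in messwerte_mv_nr_idx order so that consecutive
-- nr values land on neighboring pages (takes an exclusive lock while it
-- runs; on 8.3+ the syntax is CLUSTER messwerte_mv USING messwerte_mv_nr_idx)
CLUSTER messwerte_mv_nr_idx ON messwerte_mv;
ANALYZE messwerte_mv;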