On Mon, Aug 26, 2019 at 4:26 AM Barbu Paul - Gheorghe <barbu.paul.gheorghe@xxxxxxxxx> wrote:
> On Sun, Aug 25, 2019 at 5:51 PM Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
> >
> > Yes, it certainly looks like it is due to cold caches. But you say it is slow at first, and then say it varies greatly during a run. Is being slow at first the only way it varies greatly, or is there large variation even beyond that?
> There is great variation in run times (hundreds of ms to several
> seconds) even beyond server start-up.
> The query runs several times with different device_id, object_id and
> attribute_id lists, and the run time varies from one parameterization
> to another.
If you run the exact same query (with the same parameters) once the cache is hot, is the performance then pretty consistent within a given parameterization? Or is it still variable even within one parameterization?
If it is consistent, could you capture a fast parameterization and a slow parameterization, and show them and the plans for them?
> > You can use pg_rewarm to overcome the cold cache issue when you first start up the server.
> I cannot find anything related to pg_rewarm other than some dead ends
> from 2013, from which I gather it only works on Linux.
> Anyway, I have problems even beyond the start of the database; it's
> just easier to reproduce the problem at start-up, otherwise I have to
> leave the application running for a while (to invalidate part of the
> cache, I think).
Sorry, should have been pg_prewarm, not pg_rewarm. Unfortunately, you probably have two different problems. Reproducing it one way is unlikely to help you solve the other one.
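To use it, something along these lines should do (a minimal sketch: the extension ships in contrib, but the index name below is a placeholder for whatever index your query actually uses):

CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- load the table, and any index the query depends on, into shared_buffers
SELECT pg_prewarm('results');
SELECT pg_prewarm('results_attribute_id_timestamp_idx');  -- hypothetical index name

The default 'buffer' mode reads the relation into shared_buffers and works on every platform; only the 'prefetch' mode depends on OS support, so the Linux-only reports you found shouldn't stop you.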
> > If you query only on "results" with only the conditions that apply to "results", what is the expected number of rows, and what is the actual number of rows?
> Explain for the query on results only: https://explain.depesz.com/s/Csau
>
> EXPLAIN (ANALYZE, BUFFERS)
> SELECT DISTINCT ON (results.attribute_id) results.timestamp,
> results.data FROM results
> WHERE
> results.data <> '<NullData/>'
> AND results.data IS NOT NULL
> AND results.object_id = 1955
> AND results.attribute_id IN (4, 5) -- possibly a longer list here
> AND results.data_access_result = 'SUCCESS'
> ORDER BY results.attribute_id, results.timestamp DESC
> LIMIT 2 -- limit by the length of the attributes list
> Limit  (cost=166793.28..167335.52 rows=2 width=54) (actual time=134783.510..134816.941 rows=2 loops=1)
>   Buffers: shared hit=19086 read=46836, temp read=1522 written=3311
>   ->  Unique  (cost=166793.28..168420.01 rows=6 width=54) (actual time=134783.507..134816.850 rows=2 loops=1)
>         Buffers: shared hit=19086 read=46836, temp read=1522 written=3311
>         ->  Sort  (cost=166793.28..167606.64 rows=325346 width=54) (actual time=134783.505..134802.602 rows=205380 loops=1)
>               Sort Key: attribute_id, "timestamp" DESC
> > Do you have an index on (attribute_id, "timestamp" DESC)? That might really help if it can step through the rows already sorted, filter out the ones that need filtering out (building a partial index might help here), hit the other two tables for each of those rows using a nested loop, and stop after 2 rows which meet those conditions. The problem is if you have to step through an enormous number of rows before finding 2 of them with device_id=97.
> So maybe I should de-normalize and place the device_id column into the
> "results" table and add it to the index in your suggestion above?
Yes, if nothing else works, that should. How hard would it be to maintain that column in the correct state?
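In case it helps, an untested sketch of the partial index from the suggestion above (the index name is made up, and the WHERE clause just mirrors your query's filters):

CREATE INDEX results_attr_ts_partial_idx
    ON results (attribute_id, "timestamp" DESC)
    WHERE data IS NOT NULL
      AND data <> '<NullData/>'
      AND data_access_result = 'SUCCESS';

And if you do denormalize, maintaining the copy could look roughly like this. I'm guessing device_id currently lives in a parent table I'll call "objects" here; adjust the names to your schema:

ALTER TABLE results ADD COLUMN device_id integer;

-- backfill existing rows from the parent table
UPDATE results r
   SET device_id = o.device_id
  FROM objects o
 WHERE o.id = r.object_id;

-- keep the copy in sync on insert, and if object_id is ever changed
CREATE FUNCTION results_copy_device_id() RETURNS trigger AS $$
BEGIN
    SELECT o.device_id INTO NEW.device_id
      FROM objects o
     WHERE o.id = NEW.object_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER results_copy_device_id
    BEFORE INSERT OR UPDATE OF object_id ON results
    FOR EACH ROW EXECUTE PROCEDURE results_copy_device_id();

The index above could then become (device_id, attribute_id, "timestamp" DESC). This stays correct as long as a row's device_id never changes on the parent side; if it can, you would need a trigger on the parent table as well.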
Cheers,
Jeff