2017-02-16 14:57 GMT+13:00 Patrick B <patrickbakerbr@xxxxxxxxx>:
I've got two different scenarios:
- Production database server: PG 9.2
- Test database server: PG 9.2
- I ran the query a single time on a slave server that receives no queries, and it took >10 seconds.
- This is the server I'm working on. When I ran the query here for the first time (using explain analyze), it also took >10 seconds. It is not a LOCK, as no one was/is using this database server.
- When I ran the query for the second time (using explain analyze), it took 1 second to run.
- On the third run, it took less than 1 second.
- On this server I can reboot the machine or stop/start the Postgres process.
- I've already done:

service postgresql stop
sync; echo 3 > /proc/sys/vm/drop_caches
service postgresql start

I've made some changes to the query and would like to measure its real runtime, so I can compare the versions and keep working on it if I need to.

The question is: how can I clear the cache, so that I get a good estimate of how long the query really takes to run?

P.
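For what it's worth, the stop / sync / drop_caches / start sequence above is the usual way to get a cold-cache timing on Linux. A minimal sketch of the whole cycle, assuming a SysV-style `service` wrapper, root privileges, and the query saved in a file (`bench.sql` is a placeholder, not from the original post):

```shell
#!/bin/sh
# Cold-cache benchmark cycle for the sequence quoted above.
# DRY_RUN=1 prints each command instead of executing it, so the
# sequence can be reviewed before running it as root.

cold_cache_run() {
    do_cmd() {
        if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else eval "$*"; fi
    }
    do_cmd "service postgresql stop"            # empties shared_buffers
    do_cmd "sync"                               # flush dirty pages to disk
    do_cmd "echo 3 > /proc/sys/vm/drop_caches"  # drop the OS page cache
    do_cmd "service postgresql start"
    do_cmd "psql -f bench.sql"                  # first run = cold-cache timing
}

# Preview the sequence without touching the system:
DRY_RUN=1 cold_cache_run
```

The point of the two-step flush is that both caches matter: restarting Postgres empties shared_buffers, but only drop_caches empties the kernel page cache, so both are needed for a truly cold run.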
BTW
-> Index Only Scan Backward using ix_geo_time_end_user on geo mg (cost=0.00..7369.78 rows=24149 width=8) (actual time=0.020..0.020 rows=0 loops=1)
One of the reasons the query is so expensive is the size of this index: it is 6 GB.
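A cheaper way to tell warm from cold runs, instead of repeatedly dropping caches, is EXPLAIN (ANALYZE, BUFFERS), which is available since 9.0 and therefore works on 9.2: "shared read" blocks came from disk, "shared hit" blocks from shared_buffers, so the counts show how warm the cache was on each run. A small sketch of building the command (the query below is a stand-in, not the query from this thread):

```shell
#!/bin/sh
# Build an EXPLAIN (ANALYZE, BUFFERS) invocation. The query is a
# placeholder; substitute the real statement behind the plan above.
QUERY="SELECT 1"
CMD="EXPLAIN (ANALYZE, BUFFERS) $QUERY"
echo "$CMD"   # run it with e.g.: psql -c "$CMD"
```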