Hi,
I'd like to implement a system that prevents a PostgreSQL database from being overloaded by delaying queries when the database is already under high load, i.e. when the sum of the execution costs of the queries currently running in the database is near a certain threshold and executing the "next" query would push the total past that threshold.
Limiting the number of queries concurrently in the database to a fixed number n is out of the question since - in my opinion - n simple queries such as
SELECT c FROM t WHERE c = '...'
would generally produce a much lower workload than n complex queries. So the goal is a more dynamic approach.
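To make the idea concrete, here is a rough sketch of the admission logic I have in mind (Python; CostGate and MAX_TOTAL_COST are hypothetical names I made up, and the per-query cost is assumed to come from some estimator like the one I ask about below):

import threading

MAX_TOTAL_COST = 100000.0  # hypothetical threshold, in whatever cost units we get

class CostGate:
    # Delay callers while the summed cost of the queries already
    # running would exceed the threshold.
    def __init__(self, limit):
        self.limit = limit
        self.running_cost = 0.0
        self.cond = threading.Condition()

    def acquire(self, cost):
        with self.cond:
            # Admit when the database is idle even if a single query
            # exceeds the limit, so oversized queries cannot starve.
            while self.running_cost > 0 and self.running_cost + cost > self.limit:
                self.cond.wait()
            self.running_cost += cost

    def release(self, cost):
        with self.cond:
            self.running_cost -= cost
            self.cond.notify_all()

gate = CostGate(MAX_TOTAL_COST)

def run_query(conn, query, cost):
    gate.acquire(cost)
    try:
        with conn.cursor() as cur:
            cur.execute(query)
            return cur.fetchall()
    finally:
        gate.release(cost)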
But my problem is how to measure the execution costs of a query. My first thought was to use the optimizer's estimates, but those estimates only seem to give the time needed to execute the query.
I know that the term "execution costs" is somewhat imprecise. Ideally, the value for the execution costs would "merge" the I/O and the CPU usage caused by the query (more precisely: estimates of the I/O and CPU usage the query will incur). I've read the developer manuals, but I didn't find any information on this. Does PostgreSQL offer information on the additional workload (execution costs) caused by a query? And if it does not: does anybody have an idea how to get an estimate of the execution costs before executing a query?
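For what it's worth, here is how one could at least read the planner's estimate before running a query (a minimal sketch using psycopg2; EXPLAIN (FORMAT JSON) needs PostgreSQL 9.0 or later, the connection string is a placeholder, and the "Total Cost" it returns is in the planner's abstract cost units, not milliseconds):

import json
import psycopg2

def planned_cost(conn, query, params=None):
    # Ask the planner for its cost estimate without executing the query.
    with conn.cursor() as cur:
        cur.execute("EXPLAIN (FORMAT JSON) " + query, params)
        plan = cur.fetchone()[0]
        if isinstance(plan, str):        # older psycopg2 returns raw JSON text
            plan = json.loads(plan)
        return plan[0]["Plan"]["Total Cost"]

conn = psycopg2.connect("dbname=test")   # placeholder connection string
print(planned_cost(conn, "SELECT c FROM t WHERE c = %s", ("...",)))

This value could then be fed into the gate sketched above, but as said, I'm not sure it reflects the actual I/O and CPU workload.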
Thanks in advance
Stefan