On 8/10/23 23:40, David Rowley wrote:
> On Fri, 11 Aug 2023 at 13:54, Ron <ronljohnsonjr@xxxxxxxxx> wrote:
>> Wouldn't IO contention make for additive timings instead of exponential?
> No, not necessarily. Imagine one query running that's doing a
> parameterised nested loop join, resulting in the index on the inner
> side being descended, say, several million times. Let's say there's
> *just* enough RAM/shared buffers that once the index has been
> scanned the first time, all the required pages are cached, which
> results in no I/O on subsequent index scans. Now imagine another
> similar query using another index, and let's say this index also
> *just* fits in cache. When these two queries run concurrently, they
> each evict buffers the other one uses. Of course, the shared buffers
> code is written in such a way as to try to evict lesser-used buffers
> first, but if they're all used about the same amount, then this sort
> of thing can occur. The slowdown isn't linear.
But that's cache thrashing (which was OP's concern), not IO contention.
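
For what it's worth, a rough way to see that effect (a sketch only; the
table names below are made up, and it assumes the planner picks a
parameterised nested loop with an index scan on the inner side) is to
compare the Buffers counters from EXPLAIN (ANALYZE, BUFFERS) for each
query run alone vs. run concurrently:

  -- session 1: inner index on big_a(id) that only just fits in shared_buffers
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT count(*)
  FROM driver d
  JOIN big_a a ON a.id = d.a_id;

  -- session 2, started at the same time: same shape of query against big_b
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT count(*)
  FROM driver d
  JOIN big_b b ON b.id = d.b_id;

Run alone, the inner index scan's "Buffers:" line is nearly all
"shared hit"; run together, "shared read" climbs as each query evicts
the other's pages, and the timings blow out by far more than the sum
of the two standalone runs.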
--
Born in Arizona, moved to Babylonia.