FWIW I don't think the randomness per se is the main issue. The main problem is that this results in a loop of bitmap index scans, with 20k loops. This is made worse by the block-range nature of BRIN indexes, resulting in many more heap block accesses. The table has ~80k pages, but the bitmapscan plan reads ~2559362. That can easily happen if each loop matches 100 ranges (out of 700), each having 128 pages. Making the ranges smaller should help minimize the number of pages read unnecessarily, but the loops are an issue. And the same range may be scanned again and again, if the range is consistent with multiple values.

Interestingly enough, this is the kind of queries / plans I thought about a couple weeks ago, which made me look at SK_SEARCHARRAY support for BRIN indexes. Imagine you rewrote the query to something like:

    SELECT * FROM test_brin WHERE id IN (... literals ...);

That would scan the table only once, reducing the number of heap pages it has to access. The drawback is that we still have to build the bitmap, and without the SK_SEARCHARRAY support we just scan the index again for each element. So it's better, but only partly.

With the SK_SEARCHARRAY patch [1] this is further optimized, and for me the query runs a bit faster than a seqscan (not by much, and it depends on how the data is cached). At least with pages_per_range=1.

Of course, this won't help unless the query can be rewritten like this. At least not currently, but I wonder if we could invent some sort of "pushdown" that'd derive an array of values and push it down into a parameterized path at once (instead of doing that for each value in a loop).

regards

[1] https://commitfest.postgresql.org/42/4187/

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
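To make the suggested workaround concrete, a minimal sketch of the two changes discussed above: shrinking the BRIN ranges via the pages_per_range storage parameter, and collapsing the 20k parameterized lookups into a single IN-list scan. The index name and the sample literal values are illustrative assumptions, not taken from the original reproducer; only the table test_brin and column id come from the post.

    -- smaller ranges: one heap page per BRIN range instead of the
    -- default 128, so a matching range drags in far fewer heap pages
    CREATE INDEX test_brin_id_idx ON test_brin
        USING brin (id) WITH (pages_per_range = 1);

    -- one bitmap scan over the whole IN-list, instead of a separate
    -- bitmap index scan per value in a nested loop
    SELECT * FROM test_brin
    WHERE id IN (1, 2, 3 /* ... the 20k literal values ... */);

Without the SK_SEARCHARRAY patch, the IN-list form still probes the BRIN index once per element while building the bitmap, but the heap itself is visited only once, which is where most of the ~2.5M page reads came from.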