On Tue, Sep 14, 2021 at 5:15 PM Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
>
> Kristjan Mustkivi <sonicmonkey@xxxxxxxxx> writes:
> >   ->  Index Scan Backward using player_balance_history_idx2 on
> > mytable pbh  (cost=0.70..21639.94 rows=3885 width=66) (actual
> > time=5934.153..5934.153 rows=1 loops=1)
> >         Index Cond: ((cage_code = $3) AND (cage_player_id =
> > $2) AND (modified_time < $5))
> >         Filter: (((product_code)::text = ($1)::text) AND
> > ((balance_type)::text = ($4)::text))
> >         Rows Removed by Filter: 95589
> >         Buffers: shared hit=7623 read=18217
>
> So indeed, the core issue is that that filter condition is very selective,
> and applying it after the index scan is expensive.  Perhaps it would help
> to create an index that includes those columns along with cage_code and
> cage_player_id.  (It's not clear whether to bother with modified_time in
> this specialized index, but if you do include it, it needs to be the last
> column since you're putting a non-equality condition on it.)
>
>             regards, tom lane

But the Primary Key is defined as btree (cage_code, cage_player_id,
product_code, balance_type, version), so it should be exactly that index
(apart from the extra "version" column). And the majority of the query
plans do use the PK, with only a small number of cases going for IDX2,
which is btree (cage_code, cage_player_id, modified_time). So I am
wondering how to make them not do that.

But perhaps index bloat is indeed playing a part here, as both the PK and
IDX2 have a ~50% bloat ratio. I will try REINDEX-ing the table, although
finding a good window to do it might require some time.

Best regards,

Kristjan
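
For reference, the index Tom suggests might look like the sketch below.
The index name is made up, and whether modified_time earns its keep is an
open question, but if it is included it must come last, since the query
puts a range condition (<) on it while the other columns are compared
with equality:

```sql
-- Hypothetical covering index: all equality columns first, the
-- range-scanned modified_time last. CONCURRENTLY avoids taking a
-- write-blocking lock while the index is built.
CREATE INDEX CONCURRENTLY player_balance_history_idx3
    ON mytable (cage_code, cage_player_id, product_code,
                balance_type, modified_time);

-- For the ~50% bloat, PostgreSQL 12+ can rebuild an index without
-- blocking writes, which may remove the need for a maintenance window:
REINDEX INDEX CONCURRENTLY player_balance_history_idx2;
```

With such an index, the planner can apply all four equality conditions
inside the index scan instead of filtering 95k rows afterwards.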