On 2023-07-26 15:46:16 +0800, gzh wrote:
> SET enable_seqscan TO off;
[...]
>   ->  Parallel Bitmap Heap Scan on tbl_sha  (cost=92112.45..2663789.14 rows=800650 width=18) (actual time=260.540..21442.169 rows=804500 loops=3)
>         Recheck Cond: (ms_cd = 'MLD009'::bpchar)
>         Rows Removed by Index Recheck: 49
>         Filter: (etrys = '00000001'::bpchar)
>         Rows Removed by Filter: 295500
>         Heap Blocks: exact=13788 lossy=10565
>         ->  Bitmap Index Scan on index_search_04_mscd_cdate  (cost=0.00..91632.06 rows=3402599 width=0) (actual time=249.718..249.718 rows=3300000 loops=1)
>               Index Cond: (ms_cd = 'MLD009'::bpchar)

So now it's using index_search_04_mscd_cdate, which contains only ms_cd (and, judging from the name, other fields not relevant to this query), but it still doesn't use index_search_01, which would fit the query exactly.

I can understand that Postgres prefers a sequential scan over an index scan (the number of matching rows is about 10% of the total table size, which is a lot), but why would it prefer a less specific index to a more specific one?

Can you get Postgres to use that index at all? Find a combination of ms_cd and etrys which doesn't cover millions of rows and try that.

Also try lowering random_page_cost.

        hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@xxxxxx         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
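
A rough sketch of the two experiments suggested above, run in one session so nothing persists; the ms_cd/etrys values are made-up placeholders for a more selective pair, and the SELECT list should be whatever the real query uses:

BEGIN;
-- Try a cheaper random_page_cost for this transaction only;
-- 1.1 is a common value for SSD-backed storage.
SET LOCAL random_page_cost = 1.1;

EXPLAIN (ANALYZE, BUFFERS)
SELECT *                       -- stand-in for the original select list
  FROM tbl_sha
 WHERE ms_cd = 'XXX001'        -- placeholder: a rarer ms_cd than 'MLD009'
   AND etrys = '00000042';     -- placeholder: a rarer etrys than '00000001'

ROLLBACK;

If index_search_01 does get picked for the selective pair, the planner can use it and is simply judging it too expensive for the MLD009/00000001 combination, which is where random_page_cost comes in.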