Hi,
On my production environment (PostgreSQL 13.3), one of my queries runs very slowly, taking about 2 minutes.
I noticed that it does not use the execution plan I anticipated it would.
The query is:
SELECT t.*
FROM test t
WHERE t."existe" IS true
AND t."json_data" @> '{"book":{"title":"In Search of Lost Time"}}'
ORDER BY t."id" DESC
LIMIT 100 OFFSET 0
I know PostgreSQL does not perform very well with pagination and offsets, but all my queries must end with "LIMIT 100 OFFSET 0", "LIMIT 100 OFFSET 1", ...
If I display the actual execution plan, I get this:
Limit  (cost=0.43..1164.55 rows=100 width=632) (actual time=7884.056..121297.756 rows=1 loops=1)
  Buffers: shared hit=5311835 read=585741 dirtied=32
  ->  Index Scan Backward using test_pk on test  (cost=0.43..141104.29 rows=12121 width=632) (actual time=7884.053..121297.734 rows=1 loops=1)
        Filter: ((existe IS TRUE) AND (json_data @> '{"book": {"title": "In Search of Lost Time"}}'::jsonb))
        Rows Removed by Filter: 1215681
        Buffers: shared hit=5311835 read=585741 dirtied=32
Planning:
  Buffers: shared hit=1
Planning Time: 0.283 ms
Execution Time: 121297.878 ms
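(For reference, the plans in this mail were obtained with something like the following; the actual times and the Buffers lines come from the ANALYZE and BUFFERS options:)
EXPLAIN (ANALYZE, BUFFERS)
SELECT t.*
FROM test t
WHERE t."existe" IS true
  AND t."json_data" @> '{"book":{"title":"In Search of Lost Time"}}'
ORDER BY t."id" DESC
LIMIT 100 OFFSET 0;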
The query runs very slowly for any LIMIT value from 1 to 1147.
If I change the LIMIT value to 1148, the query runs quite fast (0.190 ms) with a nice execution plan:
SELECT t.*
FROM test t
WHERE t."existe" IS true
AND t."json_data" @> '{"book":{"title":"In Search of Lost Time"}}'
ORDER BY t."id" DESC
LIMIT 1148 OFFSET 0
Limit  (cost=13220.53..13223.40 rows=1148 width=632) (actual time=0.138..0.140 rows=1 loops=1)
  Buffers: shared hit=17
  ->  Sort  (cost=13220.53..13250.84 rows=12121 width=632) (actual time=0.137..0.138 rows=1 loops=1)
        Sort Key: id DESC
        Sort Method: quicksort  Memory: 27kB
        Buffers: shared hit=17
        ->  Bitmap Heap Scan on test  (cost=119.73..12543.88 rows=12121 width=632) (actual time=0.125..0.127 rows=1 loops=1)
              Recheck Cond: (json_data @> '{"book": {"title": "In Search of Lost Time"}}'::jsonb)
              Filter: (existe IS TRUE)
              Heap Blocks: exact=1
              Buffers: shared hit=17
              ->  Bitmap Index Scan on test_json_data_idx  (cost=0.00..116.70 rows=12187 width=0) (actual time=0.112..0.113 rows=1 loops=1)
                    Index Cond: (json_data @> '{"book": {"title": "In Search of Lost Time"}}'::jsonb)
                    Buffers: shared hit=16
Planning:
  Buffers: shared hit=1
Planning Time: 0.296 ms
Execution Time: 0.190 ms
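If I read the cost numbers right, this looks like the usual LIMIT costing trade-off (my understanding, so please correct me if I am wrong): the planner estimates 12121 rows match the filter, so for the backward index scan it only charges the fraction of the 141104.29 total cost it expects to spend before producing the LIMIT rows, roughly:
  LIMIT 100 : 0.43 + (100  / 12121) * 141104 ≈ 1164  (well below the ~13223 of the bitmap + sort plan)
  LIMIT 1148: 0.43 + (1148 / 12121) * 141104 ≈ 13364 (now above ~13223, so the plan flips)
Since in reality only 1 row matches instead of the estimated 12121, the backward scan never finds its 100 rows and ends up filtering almost the whole table (Rows Removed by Filter: 1215681), which would explain why the first plan is so slow.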
Would you have any suggestions as to why Postgres chooses such a bad query plan?
Server:
----------------------------------------------------------------------
CPU Model : AMD EPYC 7281 16-Core Processor
CPU Cores : 4
CPU Frequency : 2096.060 MHz
CPU Cache : 512 KB
Total Disk : 888.1 GB (473.0 GB Used)
Total Mem : 11973 MB (4922 MB Used)
Total Swap : 0 MB (0 MB Used)
OS : Debian GNU/Linux 10
Arch : x86_64 (64 Bit)
Kernel : 5.10.28
Virtualization : Dedicated
----------------------------------------------------------------------
I/O Speed(1st run) : 132 MB/s
I/O Speed(2nd run) : 204 MB/s
I/O Speed(3rd run) : 197 MB/s
Average I/O speed : 177.7 MB/s
postgresql.conf:
max_connections = 100
shared_buffers = 3840MB
huge_pages = on
work_mem = 9830kB
maintenance_work_mem = 960MB
effective_io_concurrency = 200
max_worker_processes = 3
max_parallel_maintenance_workers = 2
max_parallel_workers_per_gather = 2
max_parallel_workers = 3
max_wal_size = 4GB
min_wal_size = 1GB
checkpoint_completion_target = 0.9
effective_cache_size = 11520MB
default_statistics_target = 100
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
Table test: I have just over 1.2 million records in this table.
CREATE TABLE test (
"source" varchar NOT NULL,
existe bool NULL,
json_data jsonb NULL,
row_updated timestamp NOT NULL DEFAULT clock_timestamp(),
row_inserted timestamp NOT NULL DEFAULT clock_timestamp(),
id uuid NOT NULL,
CONSTRAINT test_pk PRIMARY KEY (id)
);
CREATE INDEX test_existe_idx ON test USING btree (existe);
CREATE INDEX test_id_idx ON test USING btree (id);
CREATE INDEX test_json_data_idx ON test USING gin (json_data jsonb_path_ops);
CREATE INDEX test_row_inserted_idx ON test USING btree (row_inserted);
CREATE INDEX test_row_updated_idx ON test USING btree (row_updated);
CREATE INDEX test_source_idx ON test USING btree (source);
select * from pg_stat_all_tables where relname = 'test':
relid               | 16692
schemaname          | dev
relname             | test
seq_scan            | 1816
seq_tup_read        | 724038305
idx_scan            | 31413
idx_tup_fetch       | 36863713
n_tup_ins           | 1215682
n_tup_upd           | 23127
n_tup_del           | 0
n_tup_hot_upd       | 0
n_live_tup          | 1219730
n_dead_tup          | 30
n_mod_since_analyze | 121
n_ins_since_vacuum  | 91
last_vacuum         | 2021-07-07 14:43:13
last_autovacuum     | 2021-06-10 09:29:54
last_analyze        | 2021-07-07 14:43:52
last_autoanalyze    | 2021-06-10 09:31:32
vacuum_count        | 2
autovacuum_count    | 1
analyze_count       | 1
autoanalyze_count   | 1
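One thing that stands out to me in both plans is that the planner estimates rows=12121 for the @> predicate while only 1 row actually matches. If it helps, this is the kind of check I can run to confirm the misestimate on the bare predicate (no LIMIT, no ORDER BY):
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM test t
WHERE t."json_data" @> '{"book":{"title":"In Search of Lost Time"}}';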