
Need recommendation on PARALLEL INDEX SCAN and PARALLEL APPEND


 



Hi Team,

Please take a look at the problem statement below and suggest whether there is any way to make the planner pick a PARALLEL INDEX SCAN and a PARALLEL APPEND.

Problem Statement :

We have a partitioned table whose partition key is a timestamp column (crdt). A SELECT query on this table that does not filter on the partition key performs an INDEX SCAN on every partition, and the results are combined under an APPEND node.

Our requirement is to make the planner pick:

--PARALLEL INDEX SCAN instead of INDEX SCAN

--PARALLEL APPEND instead of APPEND


PostgreSQL version --> 13.4


Table Structure :



Partitioned table "public.pay"
            Column             |           Type           | Collation | Nullable |   Default
-------------------------------+--------------------------+-----------+----------+-------------
 id                            | bigint                   |           | not null |
 pri                           | character varying(256)   |           |          |
 prf                           | character varying(128)   |           |          |
 pi                            | character varying(256)   |           |          |
 pas                           | character varying(128)   |           |          |
 s                             | payment_state            |           | not null |
 st                            | jsonb                    |           | not null |
 rct                           | character varying(32)    |           |          |
 prf                           | jsonb                    |           |          |
 pa                            | jsonb                    |           |          |
 always                        | jsonb                    |           |          | '{}'::jsonb
 pr                            | jsonb                    |           |          |
 pe                            | jsonb                    |           |          |
 cda                           | jsonb                    |           |          |
 tr                            | jsonb                    |           |          |
 ar                            | jsonb                    |           |          |
 crq                           | jsonb                    |           |          |
 cr                            | jsonb                    |           |          |
 prt                           | jsonb                    |           |          |
 rrq                           | jsonb                    |           |          |
 rtrt                          | jsonb                    |           |          |
 rrt                           | jsonb                    |           |          |
 tc                            | character varying(32)    |           |          |
 crdt                          | timestamp with time zone |           | not null | now()
 isfin                         | boolean                  |           |          |
 ifi                           | bigint                   |           |          |
 rid                           | character varying(256)   |           |          |
 pce                           | text                     |           |          |
 cce                           | text                     |           |          |
 pp                            | jsonb                    |           |          |
Partition key: RANGE (crdt)
Indexes:
    "pay_pkey" PRIMARY KEY, btree (id, crdt)
    "pay_businessid_storeid_crdt" btree ((pe ->> 'businessID'::text), (pe ->> 'storeID'::text), crdt) WHERE (pe ->> 'businessID'::text) IS NOT NULL AND (pe ->> 'storeID'::text) IS NOT NULL
    "pay_crdt_index" btree (crdt)
    "pay_ifi_idx" btree (ifi)
    "pay_index_isfin" btree (isfin)
    "pay_pi_pas_key" UNIQUE CONSTRAINT, btree (pi, pas, crdt)
    "pay_pri_prf_key" UNIQUE CONSTRAINT, btree (pri, prf, crdt)
    "pay_rid_crdt" btree (rid, crdt) WHERE rid IS NOT NULL AND crdt >= '2020-04-01 00:00:00+00'::timestamp with time zone
    "pay_tc_index" btree (tc, crdt)
    "pay_user_id_pe_index" btree ((pe ->> 'userID'::text)) WHERE (pe ->> 'userID'::text) IS NOT NULL
    "pay_user_id_pr_index" btree ((pr ->> 'userID'::text)) WHERE (pr ->> 'userID'::text) IS NOT NULL
Triggers:
    pay_bucardo_delta AFTER INSERT OR DELETE OR UPDATE ON pay FOR EACH ROW EXECUTE FUNCTION bucardo.delta_public_pay()
    pay_bucardo_note_trunc_sync_payment_pay AFTER TRUNCATE ON pay FOR EACH STATEMENT EXECUTE FUNCTION bucardo.bucardo_note_truncation('sync_payment_pay')
    pay_payment_completion_trigger_after_upsert AFTER INSERT OR UPDATE ON pay FOR EACH ROW EXECUTE FUNCTION handle_payment_completion()
Triggers firing always:
    pay_trg_change_capture_pay AFTER INSERT OR UPDATE ON pay FOR EACH ROW EXECUTE FUNCTION fun_change_capture_pay()
Number of partitions: 4 (Use \d+ to list them.)

Partitions :

 

            p_p2021_09 FOR VALUES FROM ('2021-09-01 00:00:00+00') TO ('2021-10-01 00:00:00+00'),

            p_p2021_10 FOR VALUES FROM ('2021-10-01 00:00:00+00') TO ('2021-11-01 00:00:00+00'),

            p_p2021_11 FOR VALUES FROM ('2021-11-01 00:00:00+00') TO ('2021-12-01 00:00:00+00'),

            p_default DEFAULT
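
For completeness, the partition bounds above correspond to DDL roughly of the following shape (a sketch only; names are taken from the list above, and storage/tablespace options are omitted):

```sql
-- Monthly RANGE partitions on crdt, plus a DEFAULT partition (illustrative DDL)
CREATE TABLE p_p2021_09 PARTITION OF pay
    FOR VALUES FROM ('2021-09-01 00:00:00+00') TO ('2021-10-01 00:00:00+00');
CREATE TABLE p_p2021_10 PARTITION OF pay
    FOR VALUES FROM ('2021-10-01 00:00:00+00') TO ('2021-11-01 00:00:00+00');
CREATE TABLE p_p2021_11 PARTITION OF pay
    FOR VALUES FROM ('2021-11-01 00:00:00+00') TO ('2021-12-01 00:00:00+00');
CREATE TABLE p_default PARTITION OF pay DEFAULT;
```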


Table size :


 Name       | Type              | Size
------------+-------------------+------------
 pay        | partitioned table | 0 bytes
 p_default  | table             | 8192 bytes
 p_p2021_09 | table             | 358 MB
 p_p2021_10 | table             | 370 MB
 p_p2021_11 | table             | 358 MB


Note: In the actual scenario the table size will be in TBs.

Query :


SELECT id, pri, prf, pi, pas, s, st, a, rct, pr, pa, pr, pe, cda, crdt, tr, ar, crq, cr, prt, tc, ifi, isfin, rrt, rrq, rtrt, rid, pce, cce, pp

FROM public.pay WHERE id = 3011852315482470422;


Query Plan :


pay=# EXPLAIN (ANALYZE,BUFFERS) SELECT id, pri, prf, pi, pas, s, 

st, a, rct, pr, pa, pr, pe, cda, crdt, 

tr, ar, crq, cr, prt, tc, ifi, isfin, rrt, 

rrq, rtrt, rid, pce, cce, pp

FROM public.pay WHERE id = 3011852315482470422;


                                                                          QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------------------------------------------

 Append  (cost=0.29..33.09 rows=4 width=1931) (actual time=0.148..0.211 rows=0 loops=1)

   Buffers: shared hit=8

   ->  Index Scan using pay_p2021_09_pkey on pay_p2021_09 pay_1  (cost=0.29..8.30 rows=1 width=1931) (actual time=0.015..0.022 rows=0 loops=1)

         Index Cond: (id = '3011852315482470422'::bigint)

         Buffers: shared hit=2

   ->  Index Scan using pay_p2021_10_pkey on pay_p2021_10 pay_2  (cost=0.29..8.30 rows=1 width=1931) (actual time=0.012..0.019 rows=0 loops=1)

         Index Cond: (id = '3011852315482470422'::bigint)

         Buffers: shared hit=2

   ->  Index Scan using pay_p2021_11_pkey on pay_p2021_11 pay_3  (cost=0.29..8.30 rows=1 width=1931) (actual time=0.012..0.019 rows=0 loops=1)

         Index Cond: (id = '3011852315482470422'::bigint)

         Buffers: shared hit=2

   ->  Index Scan using pay_default_pkey on pay_default pay_4  (cost=0.14..8.16 rows=1 width=1931) (actual time=0.010..0.017 rows=0 loops=1)

         Index Cond: (id = '3011852315482470422'::bigint)

         Buffers: shared hit=2

 Planning:

   Buffers: shared hit=292

 Planning Time: 10.351 ms

 Execution Time: 0.283 ms


Below are the workarounds we have tried so far, without getting the desired results:


1. We tried tuning the parameters below across a range of values, but had no luck:


 Parameter                        | Setting | Unit
----------------------------------+---------+------
 enable_parallel_append           | on      |
 enable_parallel_hash             | on      |
 force_parallel_mode              | off     |
 max_parallel_maintenance_workers | 2       |
 max_parallel_workers             | 8       |
 max_parallel_workers_per_gather  | 2       |
 min_parallel_index_scan_size     | 64      | 8kB
 min_parallel_table_scan_size     | 1024    | 8kB
 parallel_leader_participation    | on      |
 parallel_setup_cost              | 1000    |
 parallel_tuple_cost              | 0.1     |
 effective_cache_size             | 4GB     |
 shared_buffers                   | 128MB   |
 work_mem                         | 4MB     |
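
For reference, the per-session experiments with these parameters looked roughly like the following (the exact values are illustrative of what we tried, not a recommendation; lowering the costs and size thresholds should make parallel plans as cheap as possible for the planner):

```sql
-- Illustrative session-level settings used while re-running EXPLAIN
SET parallel_setup_cost = 0;            -- remove the fixed cost of starting workers
SET parallel_tuple_cost = 0;            -- remove the per-tuple transfer cost
SET min_parallel_index_scan_size = 0;   -- allow parallelism on small indexes
SET min_parallel_table_scan_size = 0;   -- allow parallelism on small tables
SET max_parallel_workers_per_gather = 4;

EXPLAIN (ANALYZE, BUFFERS)
SELECT id, crdt FROM public.pay WHERE id = 3011852315482470422;
```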

2. Performed VACUUM ANALYZE on the partitioned tables
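
Concretely, step 2 was of the following shape (partition names as in the size table above; a sketch):

```sql
-- Refresh statistics on the parent and each partition
VACUUM ANALYZE public.pay;
VACUUM ANALYZE public.p_p2021_09;
VACUUM ANALYZE public.p_p2021_10;
VACUUM ANALYZE public.p_p2021_11;
VACUUM ANALYZE public.p_default;
```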

Kindly help us improve the performance of the above query by getting the planner to pick a PARALLEL INDEX SCAN under a PARALLEL APPEND node.

Thanks & Regards
Vivekk
PostgreSQL DBA

