Hi all,
I am doing the following:
CREATE TABLE "change" (
    transaction_id uuid NOT NULL,
    id int4 NOT NULL,
    change_type varchar NOT NULL,
    object_type varchar NOT NULL,
    object_content jsonb NOT NULL,
    category_id uuid NOT NULL,
    CONSTRAINT change_pkey PRIMARY KEY (transaction_id, id)
) partition by list (transaction_id);

create table change_default
    partition of "change" default;

insert into "change"
select * from old_change; -- 17437300 rows
create table change_bf6840c7_3e7b_4100_b0e4_f5844fb7635d
    partition of "change"
    for values in ('bf6840c7-3e7b-4100-b0e4-f5844fb7635d');
Adding this last partition takes around 19 seconds. I understand that postgres has to verify that 'bf6840c7-3e7b-4100-b0e4-f5844fb7635d' is not present in the default partition, but it could use the primary key on (transaction_id, id) for that, and an index lookup shouldn't take that long, right? The new table is not in use yet, so it can't be a lock wait; the only reasonable conclusion is that postgres does a full scan of the default partition. Can that be optimized into a simple primary-key lookup?
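
For reference, the check that has to happen here is logically equivalent to a single probe of the primary-key index, which comes back in milliseconds (a sketch of what I mean):

select exists (
    select 1
    from change_default
    where transaction_id = 'bf6840c7-3e7b-4100-b0e4-f5844fb7635d'
);

From the notes in the ALTER TABLE documentation it also looks like the scan can be avoided by creating the new partition as a detached table and first adding a CHECK constraint to the default partition that excludes the new value. This is an untested sketch; the constraint name change_default_excl is just an example:

-- create the new partition as a plain table first
create table change_bf6840c7_3e7b_4100_b0e4_f5844fb7635d
    (like "change" including defaults including constraints);

-- tell postgres the default partition cannot contain the new value,
-- so the attach below can skip scanning it
alter table change_default
    add constraint change_default_excl
    check (transaction_id <> 'bf6840c7-3e7b-4100-b0e4-f5844fb7635d');

alter table "change"
    attach partition change_bf6840c7_3e7b_4100_b0e4_f5844fb7635d
    for values in ('bf6840c7-3e7b-4100-b0e4-f5844fb7635d');

-- the helper constraint is no longer needed once attached
alter table change_default
    drop constraint change_default_excl;

But the ADD CONSTRAINT step still has to scan change_default once to validate the constraint, so this seems to shift the work rather than remove it. Is there a way to make the validation itself use the index?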
Server:
PostgreSQL 11.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit
8 cores, 32 GiB RAM, 1TB SSD
Thanks in advance!
Sasa