In that case you could use a functional unique index with a digest to treat NULLs as not distinct instead, although I prefer NULL to be distinct. To me this indicates that the schema is not fully normalized and/or was ported from MS SQL Server. I've used this approach when dealing with organic schemas that had similar logic and requirements.

x_idx1 — serves queries and enforces traditional uniqueness. (Better to use a constraint instead for correctness/readability.)
x_idx2 — handles uniqueness when one of the values is NULL. (Must be an index, as unique constraints cannot be defined on expressions.)

prod=> \d x
                 Table "rui.x"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 a      | integer |           |          |
 b      | integer |           |          |
 c      | integer |           |          |
Indexes:
    "x_idx1" UNIQUE, btree (a, b, c)
    "x_idx2" UNIQUE, btree (pgcrypto.digest((COALESCE(a::text, '<null>'::text) || COALESCE(b::text, '<null>'::text)) || COALESCE(c::text, '<null>'::text), 'sha256'::text)) WHERE a IS NULL OR b IS NULL OR c IS NULL

prod=> insert into x values (1, 2, null);
INSERT 0 1
Time: 1.741 ms
prod=> insert into x values (1, 2, null);
ERROR:  duplicate key value violates unique constraint "x_idx2"
DETAIL:  Key (pgcrypto.digest((COALESCE(a::text, '<null>'::text) || COALESCE(b::text, '<null>'::text)) || COALESCE(c::text, '<null>'::text), 'sha256'::text))=(\x7cce718aefddfbd5db7925f15b0ab319d7f06b4aeae096a1542f8d1adeef36be) already exists.
Time: 0.843 ms
prod=> insert into x values (2, 2, 2);
INSERT 0 1
Time: 2.451 ms
prod=> insert into x values (2, 2, 2);
ERROR:  duplicate key value violates unique constraint "x_idx1"
DETAIL:  Key (a, b, c)=(2, 2, 2) already exists.
Time: 0.698 ms
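
For reference, a minimal sketch of DDL that would produce the layout shown by \d above. This assumes the pgcrypto extension is installed into a schema named pgcrypto (e.g. CREATE EXTENSION pgcrypto WITH SCHEMA pgcrypto); the table and index names simply mirror the output.

-- Assumed setup: CREATE EXTENSION pgcrypto WITH SCHEMA pgcrypto;
CREATE TABLE x (a integer, b integer, c integer);

-- Ordinary uniqueness for fully non-NULL rows. A UNIQUE constraint
-- would be preferable here for correctness/readability.
CREATE UNIQUE INDEX x_idx1 ON x (a, b, c);

-- Partial functional unique index covering only rows containing a NULL:
-- NULLs are folded into a '<null>' sentinel before hashing, so two rows
-- that differ only in NULL placement produce the same digest and collide.
CREATE UNIQUE INDEX x_idx2 ON x (
    pgcrypto.digest(
        COALESCE(a::text, '<null>')
        || COALESCE(b::text, '<null>')
        || COALESCE(c::text, '<null>'),
        'sha256'
    )
) WHERE a IS NULL OR b IS NULL OR c IS NULL;

One caveat on the design: the concatenation has no delimiter between columns, so distinct rows such as (1, 23, NULL) and (12, 3, NULL) would hash identically and falsely collide; inserting a separator that cannot appear in the data would avoid that.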