On 24.03.2024 at 16:44, Andreas Kretschmer wrote:
postgres=# create table bla(i int null primary key);
CREATE TABLE
postgres=# \d bla
Table "public.bla"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+---------
i | integer | | not null |
Indexes:
"bla_pkey" PRIMARY KEY, btree (i)
postgres=# drop table bla;
DROP TABLE
postgres=# create table bla(i int not null primary key);
CREATE TABLE
postgres=# \d bla
Table "public.bla"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+---------
i | integer | | not null |
Indexes:
"bla_pkey" PRIMARY KEY, btree (i)
postgres=#
As you can see, there is no difference. The PK constraint is the
important thing here.
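
Indeed, either way an attempt to insert a NULL fails identically. A
quick check against the bla table from the session above (the exact
error wording depends on the server version):

postgres=# insert into bla values (null);
ERROR:  null value in column "i" of relation "bla" violates not-null constraint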
This describes the END state perfectly. But what happens while the
table is being created, that is the question.
I am thinking along the lines that a table is created by first (1)
creating the columns in their default state, i.e. Nullable would be
true. After that (2), all constraints given in the column definitions
get created; since no not null constraint is present in the column
definition here, nothing changes. After that (3), the primary key gets
created, which requires an additional not null constraint. Assuming
that creating such a constraint would raise an error when one already
exists, I suppose there is a check for the presence of the constraint.
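
That hypothetical order can at least be mimicked by hand, and it ends
in the same state (just a sketch using a throwaway table name bla2;
not necessarily what the server does internally):

postgres=# create table bla2(i int);             -- step (1): column, nullable by default
CREATE TABLE
postgres=# alter table bla2 add primary key (i); -- step (3): adding the PK marks i not null
ALTER TABLE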
If (2) and (3) were swapped, then the step creating the not null
constraint would have to go through ALL the column definitions to find
out on which of them such a constraint is defined. At that point, one
could also check whether the nullability of a column that has already
been created matches its definition, whether "null"/"not null" was
given explicitly or the default applies.
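
Either way, the nullability the server ended up recording can be
inspected in the catalogs, for example (a sketch, assuming the bla
table from above; pg_attribute.attnotnull is the per-column flag):

postgres=# select attname, attnotnull from pg_attribute
postgres-# where attrelid = 'bla'::regclass and attnum > 0;
 attname | attnotnull
---------+------------
 i       | t
(1 row)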