On Thu, 15 Feb 2024 at 22:40, Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
On 2/15/24 09:00, Greg Sabino Mullane wrote:
> On Thu, Feb 15, 2024 at 11:43 AM Adrian Klaver
> <adrian.klaver@xxxxxxxxxxx <mailto:adrian.klaver@xxxxxxxxxxx>> wrote:
>
> That is a mixed bag:
>
>
> Ha! Good point. Our contrived example table does suffer from that, so
> perhaps the test should be:
>
> create table int_test(c1 int, c2 int);
Alright now I see:
test=# create table int_test(c1 int, c2 int);
CREATE TABLE
test=# select pg_relation_filenode('int_test');
pg_relation_filenode
----------------------
70021
(1 row)
test=# insert into int_test select a, a+1 from generate_series(1,
10000, 1) as t(a);
INSERT 0 10000
test=# select pg_relation_size('int_test');
pg_relation_size
------------------
368640
(1 row)
test=# alter table int_test alter column c2 set data type bigint;
ALTER TABLE
test=# select pg_relation_filenode('int_test');
pg_relation_filenode
----------------------
70024
(1 row)
test=# select pg_relation_size('int_test');
pg_relation_size
------------------
450560
(1 row)
Thank you.
I ran a similar test using DB Fiddle and saw the same result for fixed-length data types, i.e. the relation size increases. However, for variable-length types (e.g. widening a numeric column), the size stays the same, so that appears to be just a metadata change and should be quick even on a big table.
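For reference, a minimal sketch of that variable-length test (table and column names are just illustrative; on reasonably recent PostgreSQL versions, increasing a numeric column's precision without changing the scale avoids the rewrite):

create table num_test(c1 numeric(10,2));
select pg_relation_filenode('num_test');   -- note the filenode
alter table num_test alter column c1 set data type numeric(12,2);
select pg_relation_filenode('num_test');   -- same filenode, i.e. no table rewrite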
So one lesson for me: a downside of fixed-length data types is that any future widening of such a column forces a full table rewrite, so this sort of change on a big table can be a nightmare.
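Since DDL is transactional in PostgreSQL, one way to confirm up front whether a given ALTER forces a rewrite is to compare pg_relation_filenode() before and after inside a transaction and then roll back. Do this on a scratch copy of the table, because the rewrite work still happens before the rollback:

begin;
select pg_relation_filenode('int_test');   -- before
alter table int_test alter column c2 set data type bigint;
select pg_relation_filenode('int_test');   -- a new filenode means a full rewrite
rollback;                                  -- discard the change (the rewrite cost was still paid)

For genuinely large tables, the usual workaround is to add a new column of the target type, backfill it in batches, and then swap it in, rather than doing an in-place ALTER.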
Regards
Veem