
Re: Column type modification in big tables

On Fri, Aug 9, 2024 at 9:19 PM Greg Sabino Mullane <htamfids@xxxxxxxxx> wrote:
On Fri, Aug 9, 2024 at 6:39 AM Lok P <loknath.73@xxxxxxxxx> wrote:
Thank you so much. Will definitely try to evaluate this approach. The only concern I have is that this data moves downstream with exactly the same data type and length, so will the change cause the downstream code to break when using this column in join or filter criteria? Also, I believe the optimizer won't be able to utilize this information while preparing the execution plan.

Yes, this is not as ideal as rewriting the table, but you asked for other approaches :) As to the impact on your downstream stuff, I think you have to try it and see. It's not clear what you mean about the optimizer; it's not really going to care about numeric(10) versus numeric(8), or varchar(20) versus varchar(2). It's possible the varchar -> numeric change could cause issues, but without real-world queries and data we cannot say.
 
Another thing, correct me if I'm wrong: my understanding is that if we run the VALIDATE CONSTRAINT command after adding the constraint with CHECK ... NOT VALID, it will do a full table scan across all the partitions, but it's still beneficial compared to updating the column values for each row.

Yes, it needs to scan the entire table, but it takes only a lightweight lock, won't block concurrent access, does not need to detoast, and makes no table or index updates. Versus an entire table rewrite, which will take heavy locks, consume tons of I/O, update all the indexes, and generate quite a lot of WAL.
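For reference, the pattern being discussed looks roughly like this (table, column, and constraint names here are placeholders; the actual CHECK expression depends on what the new type needs to enforce):

```sql
-- Add the constraint without scanning existing rows; this takes only a
-- brief lock, since NOT VALID skips checking data already in the table.
ALTER TABLE foobar
    ADD CONSTRAINT mycol_range_chk CHECK (mycol >= 0 AND mycol < 1000) NOT VALID;

-- Later, validate it. This scans the whole table (all partitions), but
-- holds only a SHARE UPDATE EXCLUSIVE lock, so normal reads and writes
-- continue while it runs.
ALTER TABLE foobar VALIDATE CONSTRAINT mycol_range_chk;
```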


Thank you so much Greg.

Considering the options, assuming we are able to get a large enough downtime window to get this activity done:

Some teammates suggested altering the column with a USING clause. I am not really able to understand the difference; also, when I tested on a simple table, the USING clause seemed to take more time compared to a plain ALTER. But again, I don't see any way to see the progress and estimated completion time. Can you share your thoughts on this?

ALTER TABLE foobar ALTER COLUMN mycol TYPE NUMERIC(3) USING mycol::NUMERIC(3);
vs.
ALTER TABLE foobar ALTER COLUMN mycol TYPE NUMERIC(3);
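(As far as I know, there is no pg_stat_progress_* view covering ALTER TABLE rewrites, the way there is for VACUUM or CREATE INDEX. A crude proxy, since a non-binary-coercible type change writes a complete new copy of the table's files, is to poll database size from another session and watch it grow:)

```sql
-- Rough progress proxy during a rewriting ALTER TABLE: the rewrite
-- produces a full new copy of the table on disk, so repeated checks of
-- database size roughly track how much has been written so far.
SELECT pg_size_pretty(pg_database_size(current_database()));
```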

*****
Another thing that comes to mind: should we just create a new partitioned table (say new_part_table) from scratch, using the DDL of the existing table (say old_part_table), and then load the data into it with INSERT INTO new_part_table ... SELECT ... FROM old_part_table? Then create indexes and constraints etc., something like below.

Will this approach be faster/better compared to the simple "alter table alter column" approach above, considering we will have 4-6 hours of downtime for altering three different columns on this ~5 TB table?

-- Steps

Create a table exactly the same as the existing partitioned table, but with the modified column types/lengths.

drop indexes;  (except PK and FK indexes, maybe)
drop constraints;

insert into new_part_table (...) select (...) from old_part_table;

create indexes concurrently;
create constraints;  (but this table is also a child of another partitioned table, so creating the foreign key may be resource-consuming here too)

drop the old_part_table;
rename the new_part_table to old_part_table;
rename all the partitions;

VACUUM old_part_table;
ANALYZE old_part_table;
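A minimal sketch of the steps above (all table, column, and partition names here are hypothetical placeholders; the real DDL would mirror the existing table's full definition, and there would be one CREATE INDEX per partition):

```sql
-- Hypothetical copy-and-swap sketch; names and types are placeholders.

-- 1. New partitioned table with the corrected column types.
CREATE TABLE new_part_table (
    id      bigint NOT NULL,
    mycol   numeric(3),          -- e.g. previously varchar(20)
    created date   NOT NULL
) PARTITION BY RANGE (created);

CREATE TABLE new_part_table_2024
    PARTITION OF new_part_table
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- 2. Bulk load; an explicit cast handles the type change.
INSERT INTO new_part_table (id, mycol, created)
SELECT id, mycol::numeric(3), created
FROM old_part_table;

-- 3. Indexes after the load. CREATE INDEX CONCURRENTLY cannot run inside
--    a transaction block, and is done per partition, not on the parent.
CREATE INDEX CONCURRENTLY new_part_table_2024_mycol_idx
    ON new_part_table_2024 (mycol);

-- 4. Swap the names, then refresh statistics.
BEGIN;
DROP TABLE old_part_table;
ALTER TABLE new_part_table RENAME TO old_part_table;
COMMIT;

VACUUM ANALYZE old_part_table;
```

Note that during a planned downtime window, plain CREATE INDEX (without CONCURRENTLY) may actually be preferable, since it is faster and the concurrent variant's non-blocking behavior buys nothing when no one else is writing.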
