On Fri, Aug 9, 2024 at 2:06 AM Greg Sabino Mullane <htamfids@xxxxxxxxx> wrote:
On Thu, Aug 8, 2024 at 2:39 PM Lok P <loknath.73@xxxxxxxxx> wrote:

> Can anybody suggest any other possible way here.

Sure - how about not changing the column type at all?

> one of the columns from varchar(20) to varchar(2)

ALTER TABLE foobar ADD CONSTRAINT twocharplease
  CHECK (length(mycol) <= 2) NOT VALID;

> one of the columns from Number(10,2) to Numeric(8,2)

(numeric(8,2) leaves six digits before the decimal point, so cap values below 10^6:)

ALTER TABLE foobar ADD CONSTRAINT eightprecision
  CHECK (mycol < 10^6) NOT VALID;

> two of the columns from varchar(20) to numeric(3)

This one is trickier, as we don't know the contents, nor why it is going to numeric(3) - not a terribly useful data type, but let's roll with it and assume the stuff in the varchar is a number of some sort, and that we don't allow nulls:

ALTER TABLE foobar ADD CONSTRAINT onekorless
  CHECK (mycol::numeric(3) IS NOT NULL) NOT VALID;

You probably want to check on the validity of the existing rows: see the docs on VALIDATE CONSTRAINT here:
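A minimal sketch of the two-step flow described above, assuming the hypothetical foobar table and mycol column: the constraint is added instantly with NOT VALID (only new and updated rows are checked), and the existing rows are verified later with VALIDATE CONSTRAINT, which scans the table under a SHARE UPDATE EXCLUSIVE lock instead of rewriting it:

```sql
-- Step 1: add the constraint without scanning existing rows.
-- This needs only a brief lock and does not touch old data.
ALTER TABLE foobar
  ADD CONSTRAINT twocharplease CHECK (length(mycol) <= 2) NOT VALID;

-- Step 2 (later, at a quiet time): check the existing rows.
-- This scans the table but allows concurrent reads and writes.
ALTER TABLE foobar VALIDATE CONSTRAINT twocharplease;
```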
Thank you so much. I will definitely evaluate this approach. The only concern I have is that this data moves downstream with exactly the same data type and length, so will the downstream code break when using this column in join or filter criteria? Also, I believe the optimizer won't be able to use this information while preparing the execution plan.
Another thing - correct me if I'm wrong - my understanding is that if we run the VALIDATE CONSTRAINT command after adding the constraint with NOT VALID, it will do a full table scan across all the partitions, but that is still beneficial compared to updating the column values for every row.
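For reference, the validation state of a constraint can be checked in the catalog before and after running VALIDATE CONSTRAINT - a standard pg_constraint query, assuming the foobar example above:

```sql
-- List constraints on foobar and whether each has been validated yet;
-- convalidated is false for constraints still marked NOT VALID.
SELECT conname, convalidated
  FROM pg_constraint
 WHERE conrelid = 'foobar'::regclass;

-- Validate when convenient: this scans the table (every partition, for a
-- partitioned table) but holds only a SHARE UPDATE EXCLUSIVE lock and does
-- not rewrite any rows, unlike an UPDATE of each row.
ALTER TABLE foobar VALIDATE CONSTRAINT twocharplease;
```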