On 2025-01-09 20:52:27 +0100, shammat@xxxxxxx wrote:
> On 09.01.25 at 20:17, veem v wrote:
> > > Out of curiosity, why NUMERIC(15,0) instead of BIGINT?
> >
> > It's for aligning the database column types to the data model, and
> > it's happening across all the upstream and downstream systems. I was
> > thinking this could be made faster with the single-line alter
> > statement "Alter table <table_name> alter column <column_name> type
> > numeric(15,0) USING <column_name>::NUMERIC(15,0);"
>
> Hmm, I would rather change numeric(15,0) to bigint if I had to "align"
> types across systems.

I'm also wondering what "the data model" is. If I have numeric(15,0) in
an abstract data model, that means I expect values larger than
99,999,999,999,999 but at most 999,999,999,999,999. That seems oddly
specific, and also somewhat at odds with reality, when until now there
apparently haven't been any values larger than 2,147,483,647. What kind
of real-world value could suddenly jump by more than four orders of
magnitude but certainly not by more than six? A bigint is much less
specific (more than 2,147,483,647 but not more than
9,223,372,036,854,775,807) and therefore more suitable for values where
you don't really know the range.

However, for the problem at hand, I doubt it makes any difference.
Surely converting a few million values takes much less time than
rewriting a 50 GB table and all its indexes, so there isn't really a
faster way to do what Veem wants. There may, however, be a less
disruptive way: he could create a new column, fill it with the new
values (which takes at least as long but can be done in the
background), and then switch it over and drop the old column.
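A rough sketch of that approach (all object names below are made up,
and it glosses over NOT NULL constraints, indexes, and other dependent
objects on the old column, which need extra care):

  -- Add the new column; this is a quick catalog-only change.
  ALTER TABLE mytable ADD COLUMN myvalue_new numeric(15,0);

  -- Backfill in batches to keep each transaction short; this assumes
  -- an integer primary key "id" to split the work:
  UPDATE mytable SET myvalue_new = myvalue
   WHERE id BETWEEN 1 AND 1000000;
  -- ... repeat for the remaining id ranges ...

  -- Switch over in one short transaction. The catch-up UPDATE picks
  -- up rows that changed while the backfill was running (a trigger
  -- keeping both columns in sync would avoid even that):
  BEGIN;
  UPDATE mytable SET myvalue_new = myvalue
   WHERE myvalue_new IS DISTINCT FROM myvalue;
  ALTER TABLE mytable DROP COLUMN myvalue;
  ALTER TABLE mytable RENAME COLUMN myvalue_new TO myvalue;
  COMMIT;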
        hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@xxxxxx         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"