> On 6/5/2015 11:37 AM, Ravi Krishna wrote:
>>
>> Why is PG even re-writing all rows when the data type is being changed
>> from smaller (int) to larger (bigint) type, which automatically means
>> existing data is safe. Like, changing from varchar(30) to varchar(50)
>> should involve no rewrite of existing rows.
>
> int to bigint requires a storage change, as all bigints are 64-bit while
> all ints are 32-bit. It would be a MESS to try to keep track of a table
> that has some int and some bigint storage of a given field.
>
> Now, varchar(30) to varchar(50), that I can't answer. Are you sure that
> does a rewrite? The storage is exactly the same for those.

Perhaps I was not clear. I don't expect any rewrite for a change from varchar(30) to varchar(50), for the same reason you mention above.

Yes, it is normal for the storage size of a bigint to differ from that of a 32-bit int, but PG uses MVCC. If and when the current row gets updated, MVCC ensures a new row version is written, and that new version could be written with the new data type.

I believe PG adds or drops a column without a rewrite because of MVCC. For example, say I add a new col-T to a table and drop col-S via a single ALTER TABLE command. I am assuming this is what happens internally: PG simply updates the metadata in the system catalogs, so all new rows reflect col-T, and as and when the old rows get modified, they too are updated to the new structure.

If my understanding above is correct, why is it not applied in the case of an int -> bigint change?

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
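As a side note, one way to observe which ALTER TABLE forms rewrite the table is to watch pg_class.relfilenode, which identifies the on-disk file backing the relation: a catalog-only change leaves it alone, while a rewrite creates a new file and the value changes. A minimal sketch (the table and column names here are made up for illustration):

```sql
-- Hypothetical table, names are illustrative only.
CREATE TABLE t (id int, col_s text);

-- Note the current on-disk file backing the table.
SELECT relfilenode FROM pg_class WHERE relname = 't';

-- Add/drop columns in one statement: a catalog-only change,
-- so relfilenode should be unchanged afterwards.
ALTER TABLE t ADD COLUMN col_t text, DROP COLUMN col_s;
SELECT relfilenode FROM pg_class WHERE relname = 't';

-- Type change that forces a full table rewrite:
-- relfilenode should now be different.
ALTER TABLE t ALTER COLUMN id TYPE bigint;
SELECT relfilenode FROM pg_class WHERE relname = 't';
```

Running this in a psql session should show the relfilenode staying the same across the add/drop but changing after the int -> bigint conversion, which is the asymmetry being asked about.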