Ow Mun Heng wrote:
Are there any quick hacks to do this? There are around 20-30 million
rows of data.
I want to change a column type from varchar(4) to varchar()
The table size is ~10-15 GB (and another 10-15 GB for indexes).
What would be the preferred way of doing it? Should I drop the
indexes first to make things faster? Would it matter?
The punch line is that since the databases are connected via Slony,
this makes it even harder to pull off. My last try got the DDL change
completed in about 3 hours (on the smallest table of the bunch), and it
hung everything.
Before PostgreSQL supported "alter table ... type ..." conversions, I
did this a few times when I discovered later that my varchar() fields
were too short, and it worked perfectly.
Example, with {OLDLEN} = 4 and {NEWLEN} = 60:

update pg_attribute
   set atttypmod = {NEWLEN} + 4   -- atttypmod stores the declared length plus 4 bytes of varlena header
 where attname = 'the-name-of-the-column'
   and attrelid = (select oid from pg_class
                   where relname = 'the-name-of-the-table')
   and atttypmod = {OLDLEN} + 4;  -- guard: nothing changes unless the old length still matches
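
For a quick sanity check afterwards, you can read the value back out of
the catalog (a minimal sketch using the same placeholder names; the
"- 4" undoes the header offset mentioned above):

select atttypmod - 4 as max_length
  from pg_attribute
 where attrelid = (select oid from pg_class
                   where relname = 'the-name-of-the-table')
   and attname = 'the-name-of-the-column';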
This works very well when you want to increase the maximum length;
do not try to reduce the maximum length this way!
Disclaimer: I do not know whether Slony might have a problem with this.
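
For comparison, on a server recent enough to have it, the supported
route is the "alter table ... type ..." mentioned above. A sketch with
the same placeholder names and the example length of 60 (note that this
can rewrite the whole table, which is why it can take hours on a large
table):

alter table "the-name-of-the-table"
    alter column "the-name-of-the-column" type varchar(60);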