I've looked through the docs, but I'm unable to find complete answers to my questions, so thanks in advance if you can lend any expertise.
Here's the situation I'm in (always a good opener, right? :) ):
We've got a Postgres database with *a lot* of data in one table, on the order of 100 million rows at this point. Postgres is, of course, handling it with aplomb.
However, when the engineer who set up our schema put things in place, he neglected to think through how many rows we might eventually have and went with the default type for the 'id' column (yes, we are a Rails shop; no, he shouldn't have been allowed to design the schema).
That default is 'integer', which, as I understand it, is a 32-bit type, so the ceiling is 2,147,483,647. Sadly, that's not gonna get it done: we will hit that limit some time next year, depending on growth.
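(For what it's worth, here's the rough headroom check I've been running; 'my_table' is a stand-in for the real table name, and it assumes ids are handed out more or less sequentially:)

-- rough headroom check; 2147483647 is the signed 32-bit max
SELECT max(id) AS current_max,
       2147483647 - max(id) AS ids_remaining
FROM my_table;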
OK, simple enough, just run something like this:
ALTER TABLE my_table ALTER COLUMN id TYPE bigint;
However, given the size of this table, I have no idea how long something like that might take. In general, I've had a tough time getting feedback from Postgres on the progress of a running query, how much longer it might take, and so on.
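The closest I've come to "progress" is watching the statement from a second session, which tells me it's running and for how long, but not how far along it is (column names assume Postgres 9.2 or newer; older versions call these procpid and current_query):

-- from another connection, while the ALTER is running
SELECT pid, now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE query ILIKE 'ALTER TABLE%';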
So my question is: is there a way to estimate, even roughly, how long something like this will take? Our DB is out on crappy Amazon EC2 instances, so we don't exactly have screamers set up. Any tools I can use? Any tips? I don't need anything exact, just the order of magnitude: minutes, hours, days, or weeks.
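The only idea I've come up with so far is to copy a slice of the table and time the same ALTER on that in psql, then extrapolate. The names below are placeholders, and since the copy has none of the real table's indexes, I'd guess this only gives a lower bound:

-- rough dry run: copy ~1M rows, then time the same ALTER (in psql)
CREATE TABLE my_table_sample AS
  SELECT * FROM my_table LIMIT 1000000;
\timing on
ALTER TABLE my_table_sample ALTER COLUMN id TYPE bigint;
DROP TABLE my_table_sample;

Is that a reasonable way to ballpark it, or is there something better?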
Again, thanks in advance,
Carson