On 09/30/2011 11:41 PM, John R Pierce wrote:
> On 09/30/11 2:09 AM, J.V. wrote:
>> Some tables have millions of rows,
>
> well, something like UPDATE tablename SET
> id=generate_series(1,numberofrows); will update every row to a
> sequential value. However, I have no idea how you would match the
> foreign key references in other tables to these new sequence values.
There are two ways to do that.
You can add a *new* column for the new keys to the main table and
generate them there. Then you add a matching empty column to each
referencing table, fill it with a JOIN against the old key, and ALTER
each referencing table to add a FOREIGN KEY on the new column before
dropping its old key column. Finally, you drop the old key column in
the main table.
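
Roughly, with made-up names (a "parent" table keyed by old_id and a
single referencing table "child" with parent_old_id -- none of these
come from your schema, so adjust to taste), the first approach might
look something like:

-- Add and populate the new key on the main table.
ALTER TABLE parent ADD COLUMN new_id bigint;
UPDATE parent SET new_id = sub.rn
FROM (SELECT old_id, row_number() OVER (ORDER BY old_id) AS rn
      FROM parent) sub
WHERE parent.old_id = sub.old_id;
ALTER TABLE parent ADD CONSTRAINT parent_new_id_key UNIQUE (new_id);

-- Add a matching column to each referencing table and fill it
-- with a JOIN against the old key.
ALTER TABLE child ADD COLUMN parent_new_id bigint;
UPDATE child SET parent_new_id = parent.new_id
FROM parent WHERE child.parent_old_id = parent.old_id;
ALTER TABLE child
  ADD CONSTRAINT child_parent_new_id_fkey
  FOREIGN KEY (parent_new_id) REFERENCES parent (new_id);

-- Drop the old key columns once everything references the new one.
ALTER TABLE child DROP COLUMN parent_old_id;
ALTER TABLE parent DROP COLUMN old_id;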
Alternatively, you can ALTER all the foreign key constraints to use ON
UPDATE CASCADE, then UPDATE the main table to set the new keys.
PostgreSQL will cascade the changes to the referencing tables.
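
With the same made-up names, the cascade approach is something like
this; you can't just change an existing constraint's ON UPDATE action,
so the constraint gets dropped and re-created:

BEGIN;

-- Re-create each foreign key with ON UPDATE CASCADE.
ALTER TABLE child DROP CONSTRAINT child_parent_id_fkey;
ALTER TABLE child
  ADD CONSTRAINT child_parent_id_fkey
  FOREIGN KEY (parent_id) REFERENCES parent (id)
  ON UPDATE CASCADE;

-- Renumber the main table; the new values cascade to child.parent_id.
-- Caveat: if the new and old id ranges overlap, the unique index behind
-- the primary key can reject transient duplicates mid-statement; making
-- the PK DEFERRABLE or renumbering via a non-overlapping range avoids that.
UPDATE parent SET id = sub.rn
FROM (SELECT id, row_number() OVER (ORDER BY id) AS rn FROM parent) sub
WHERE parent.id = sub.id;

COMMIT;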
The second method is simpler and you might think it'd be faster, but it
probably won't be. The first method requires one sequential rewrite of
each table when the UPDATE to fill the new key columns runs, but is
otherwise just a series of JOINs on key columns. On the other hand, the
second method requires *lots* of *random* writes all over the place on
the referencing tables, and is likely to be a lot slower even if you
have indexes on your foreign key columns. If you *don't* have indexes on
your foreign key columns, the second method is going to be spectacularly,
amazingly, stunningly slow.
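
PostgreSQL does not create indexes on foreign key columns for you, so if
they're missing, the cascade has to scan the whole referencing table for
every changed key. They're a one-liner each (made-up names again):

CREATE INDEX child_parent_id_idx ON child (parent_id);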
--
Craig Ringer