On Wed, Apr 27, 2011 at 6:26 PM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
> I had a similar problem about a year ago. The parent table had about
> 1.5B rows, each with a unique ID from a bigserial. My approach was to
> create all the child tables needed for the past and the next month or
> so. Then, I simply did something like:

Note that I also created all the triggers to put the rows into the right tables as well.

> begin;
> insert into table select * from only table where id between 1 and 10000000;
> delete from only table where id between 1 and 10000000;
> -- first few times, check to make sure it's working, of course
> commit;
> begin;
> insert into table select * from only table where id between 10000001
> and 20000000;
> delete from only table where id between 10000001 and 20000000;
> commit;
>
> and so on. New entries were already going into the child tables as
> they showed up, and old entries were migrating 10M rows at a time. This
> kept the moves small enough so as not to run the machine out of any
> resource involved in moving 1.5B rows at once.

--
To understand recursion, one must first understand recursion.

--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
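For anyone wanting to script the batching rather than type each transaction by hand, here is a minimal sketch of a generator for the transactions Scott describes: re-insert a slice of ids into the parent (so the routing trigger files each row into the right child), then delete that slice from ONLY the parent. The table name "parent_tbl" and the 25M max id are illustrative assumptions, not from the original post; verify the generated SQL against your schema before running it.

```python
def migration_batches(table, max_id, batch_size=10_000_000):
    """Yield one BEGIN...COMMIT block of SQL per id range."""
    low = 1
    while low <= max_id:
        high = min(low + batch_size - 1, max_id)
        yield "\n".join([
            "begin;",
            # Inserting into the parent fires the partitioning trigger,
            # which routes each row into the correct child table.
            f"insert into {table} select * from only {table} "
            f"where id between {low} and {high};",
            # ONLY restricts the delete to the parent itself, leaving
            # the freshly routed child-table rows untouched.
            f"delete from only {table} where id between {low} and {high};",
            "commit;",
        ])
        low = high + 1

if __name__ == "__main__":
    # Print each batch; feed them to psql one at a time, checking
    # counts after the first few, as Scott suggests.
    for sql in migration_batches("parent_tbl", 25_000_000):
        print(sql + "\n")
```

Running each batch in its own transaction is the point: a single 1.5B-row move would bloat WAL and lock the parent for hours, while 10M-row slices stay within the machine's resources.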