Re: Re: best practice for moving millions of rows to child table when setting up partitioning?


 



I had a similar problem about a year ago.  The parent table had about
1.5B rows, each with a unique ID from a bigserial.  My approach was to
create all the child tables needed for the past and for the next month
or so (a sketch of that setup follows the example below).  Then I
simply did something like:

begin;
-- "parent_table" stands in for the real table name; ONLY limits the
-- select and delete to rows still physically stored in the parent,
-- while the insert gets routed into the proper child by the trigger
insert into parent_table select * from only parent_table where id between 1 and 10000000;
delete from only parent_table where id between 1 and 10000000;
-- first few times, check to make sure it's working of course
commit;

begin;
insert into parent_table select * from only parent_table
where id between 10000001 and 20000000;
delete from only parent_table where id between 10000001 and 20000000;
commit;
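
For anyone building this from scratch: the batches above rely on the
older inheritance-style partitioning, where an insert into the parent is
redirected to a child by a trigger.  A minimal sketch of that setup,
using the same hypothetical parent_table name and id ranges (not the
poster's actual schema), might look like:

create table parent_table_p01 (
    check (id between 1 and 10000000)
) inherits (parent_table);

create table parent_table_p02 (
    check (id between 10000001 and 20000000)
) inherits (parent_table);

-- route rows inserted into the parent to the matching child
create or replace function parent_table_insert_router() returns trigger as $$
begin
    if new.id between 1 and 10000000 then
        insert into parent_table_p01 values (new.*);
    elsif new.id between 10000001 and 20000000 then
        insert into parent_table_p02 values (new.*);
    else
        raise exception 'no child table for id %', new.id;
    end if;
    return null;  -- keep the row out of the parent itself
end;
$$ language plpgsql;

create trigger parent_table_insert
    before insert on parent_table
    for each row execute procedure parent_table_insert_router();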

and so on.  New entries were already going into the child tables as
they showed up, while the old entries were migrated 10M rows at a time.
This kept each move small enough that the machine never ran out of any
resource that moving 1.5B rows at once would have required.
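
In the spirit of the "check to make sure it's working" comment above,
the counts can also be compared inside the transaction before
committing.  A sketch, again assuming the hypothetical parent_table and
its first child:

begin;
insert into parent_table select * from only parent_table where id between 1 and 10000000;
delete from only parent_table where id between 1 and 10000000;
-- the parent should no longer hold this range; the child should hold all of it
select count(*) from only parent_table where id between 1 and 10000000;  -- expect 0
select count(*) from parent_table_p01 where id between 1 and 10000000;   -- expect the batch size
commit;  -- or rollback if the counts look wrong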


