On Jan 12, 2017, Jonathan Vanasco <postgres@xxxxxxxx> wrote:

> On Jan 12, 2017, at 5:52 PM, Merlin Moncure wrote:
>
>> On Thu, Jan 12, 2017 at 2:19 PM, btober@xxxxxxxxxxxx
>> <btober@xxxxxxxxxxxxxxx> wrote:
>>>
>>> Review manual section 7.8.2, Data-Modifying Statements in WITH:
>>>
>>> https://www.postgresql.org/docs/9.6/static/queries-with.html
>>
>> This.
>>
>> with data as (delete from foo where ... returning *)
>> insert into foo_backup select * from data;
>
> Thanks, btober and merlin. That's exactly what I want.

To help you a little more, I just did this for a set of tables within the last week. :)

The heart of the program is this SQL:

    my $Chunk_size = 10000;
    my $Interval   = 24;

    my $sql = "
        WITH keys AS (
            SELECT $pk_column
            FROM $table
            WHERE $time_column < NOW() - '$Interval MONTHS'::INTERVAL
            ORDER BY $pk_column
            LIMIT $Chunk_size
        ), data AS (
            DELETE FROM $table
            WHERE $pk_column <= (SELECT MAX($pk_column) FROM keys)
            RETURNING *
        )
        INSERT INTO archive_$table
        SELECT * FROM data;";

That's from Perl, but I suspect you can guess what each variable should be for your application. You can set $Chunk_size to whatever you want.

There is obviously a loop around that which executes until we get 0 rows; then we move on to the next table. The point of the chunks is to limit the impact on the production tables as we move data out of them. If you don't have that concern and want to do all rows at once, remove the LIMIT and ORDER BY.

HTH,

Kevin

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
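For readers who don't speak Perl, the chunked archive-and-delete loop described in the message can be sketched in Python. This is a minimal sketch, not the poster's actual program: the table and column names (`events`, `id`, `created_at`) are hypothetical, and the `execute` callback is stubbed out so the control flow can be shown without a live PostgreSQL connection.

```python
# Sketch of the chunked archive-and-delete loop from the post.
# Names are placeholders; `execute` is a caller-supplied function that
# runs the SQL and returns the number of rows moved in that pass.

CHUNK_SIZE = 10000        # rows moved per pass (the post's $Chunk_size)
INTERVAL_MONTHS = 24      # age cutoff (the post's $Interval)

def build_archive_sql(table, pk_column, time_column,
                      interval_months=INTERVAL_MONTHS,
                      chunk_size=CHUNK_SIZE):
    """Build the WITH keys/data statement, mirroring the Perl
    string interpolation in the post."""
    return f"""
        WITH keys AS (
            SELECT {pk_column}
            FROM {table}
            WHERE {time_column} < NOW() - '{interval_months} MONTHS'::INTERVAL
            ORDER BY {pk_column}
            LIMIT {chunk_size}
        ), data AS (
            DELETE FROM {table}
            WHERE {pk_column} <= (SELECT MAX({pk_column}) FROM keys)
            RETURNING *
        )
        INSERT INTO archive_{table}
        SELECT * FROM data;"""

def archive_table(table, pk_column, time_column, execute):
    """Repeat the statement until a pass moves 0 rows, as the post
    describes; return the total number of rows archived."""
    sql = build_archive_sql(table, pk_column, time_column)
    total = 0
    while True:
        moved = execute(sql)
        if moved == 0:
            break
        total += moved
    return total
```

With a real driver, `execute` would wrap something like psycopg2's `cursor.execute(sql)` followed by a commit, returning `cursor.rowcount`. Note that, like the original Perl, this interpolates identifiers into the SQL string, so the table and column names must come from trusted configuration, never from user input.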