Hi,
Would an approach along the lines below work for you? I haven't tested this, but it should give the idea:

prev_interval := '0'::interval;
LOOP
IF ( "From Date" + "Interval" <= "To Date") THEN
EXECUTE FORMAT(
$$
COPY (SELECT <column list> FROM <table name> WHERE timestamp_column >= %L AND timestamp_column < %L) TO '%s.csv'
$$,
("From Date" + prev_interval)::TEXT,
("From Date" + "Interval")::TEXT,
(Filename || (EXTRACT(EPOCH FROM "Interval") / 60)::BIGINT)::TEXT);
prev_interval := "Interval";
"Interval" := "Interval" + "Interval";
"Interval" := "Interval" + "Interval";
ELSE
EXIT;
END IF;
END LOOP;
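
For completeness, here is a self-contained sketch of the same idea as a DO block. Everything concrete in it is an assumption for illustration only: the table name (readings), its columns (ts, val), the date range, the 20-minute chunk size and the /tmp/ output path; adjust them to your schema. Keep in mind that a server-side COPY ... TO file writes on the database server's filesystem and requires superuser plus an absolute path.

DO $do$
DECLARE
    from_ts  timestamp := '2014-01-01 00:00:00';  -- assumed start of the data
    to_ts    timestamp := '2014-01-02 00:00:00';  -- assumed end of the data
    chunk    interval  := '20 minutes';           -- one file per 20-minute window
    lower_ts timestamp;
    upper_ts timestamp;
    fname    text;
BEGIN
    lower_ts := from_ts;
    WHILE lower_ts < to_ts LOOP
        upper_ts := lower_ts + chunk;
        -- name each file after the window start, expressed in epoch minutes
        fname := '/tmp/readings_' || (extract(epoch FROM lower_ts) / 60)::bigint || '.csv';
        EXECUTE format(
            $sql$ COPY (SELECT ts, val
                          FROM readings
                         WHERE ts >= %L AND ts < %L
                         ORDER BY ts)
                    TO %L WITH (FORMAT csv, HEADER) $sql$,
            lower_ts, upper_ts, fname);
        lower_ts := upper_ts;
    END LOOP;
END;
$do$;

If writing files on the server isn't an option, the same windowing can be driven from a shell loop using psql's \copy, which writes on the client instead. Either way, an index on the timestamp column should let each window be served by an index scan rather than a full scan of the table.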
On Thu, May 1, 2014 at 11:20 PM, Seb <spluque@xxxxxxxxx> wrote:
Hello,
I've been looking for a way to write a table into multiple files, and am
wondering if there are some clever suggestions. Say we have a table
that is too large (several Gb) to write to a file that can be used for
further analyses in other languages. The table consists of a timestamp
field and several numeric fields, with records every 10th of a second.
It could be meaningfully broken down into subsets of say 20 minutes
worth of records. One option is to write a shell script that loops
through the timestamp, selects the corresponding subset of the table,
and writes it as a unique file. However, this would be extremely slow
because each select takes several hours, and there can be hundreds of
subsets. Is there a better way?
Cheers,
--
Seb