On 01/05/14 19:50, Seb wrote:
> Hello,
>
> I've been looking for a way to write a table into multiple files, and am
> wondering if there are some clever suggestions.  Say we have a table
> that is too large (several Gb) to write to a file that can be used for
> further analyses in other languages.  The table consists of a timestamp
> field and several numeric fields, with records every 10th of a second.
> It could be meaningfully broken down into subsets of, say, 20 minutes'
> worth of records.  One option is to write a shell script that loops
> through the timestamps, selects the corresponding subset of the table,
> and writes it as a unique file.  However, this would be extremely slow
> because each select takes several hours, and there can be hundreds of
> subsets.  Is there a better way?

You can have COPY ... TO PROGRAM do the splitting in a single pass:

# copy (select * from generate_series(1,1000)) to program 'split -l 100 - /tmp/xxx';
COPY 1000
# \q
$ ls -l /tmp/xxxa*
-rw------- 1 postgres postgres 292 May 1 19:08 /tmp/xxxaa
-rw------- 1 postgres postgres 400 May 1 19:08 /tmp/xxxab
-rw------- 1 postgres postgres 400 May 1 19:08 /tmp/xxxac
-rw------- 1 postgres postgres 400 May 1 19:08 /tmp/xxxad
-rw------- 1 postgres postgres 400 May 1 19:08 /tmp/xxxae
-rw------- 1 postgres postgres 400 May 1 19:08 /tmp/xxxaf
-rw------- 1 postgres postgres 400 May 1 19:08 /tmp/xxxag
-rw------- 1 postgres postgres 400 May 1 19:08 /tmp/xxxah
-rw------- 1 postgres postgres 400 May 1 19:08 /tmp/xxxai
-rw------- 1 postgres postgres 401 May 1 19:08 /tmp/xxxaj

Each of those contains 100 lines.

Torsten
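
Applied to Seb's table, the same single-pass trick can split on the
timestamp instead of a fixed line count by piping COPY's output through
awk. A minimal sketch, assuming a hypothetical table "readings" with a
timestamptz column "ts" (both names are stand-ins for the real schema);
each row is routed to a file named after its 20-minute bucket:

    copy (
      -- 1200 seconds = one 20-minute bucket; "readings"/"ts" are hypothetical
      select floor(extract(epoch from ts) / 1200)::bigint as bucket, *
      from readings
      order by ts
    ) to program
      $$awk -F'\t' '{ f = "/tmp/readings_" $1 ".txt"; sub(/^[^\t]*\t/, ""); print > f }'$$;

COPY's text format is tab-delimited, so -F'\t' makes the bucket
available as $1, and the sub() strips that leading column before the row
is appended to its file. Two caveats: the program runs on the server as
the postgres OS user, and with many hundreds of buckets a plain awk may
hit the open-file-descriptor limit; GNU awk juggles extra output files
automatically, and since the rows arrive in ts order you could instead
close() each file when the bucket changes.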