
Re: break table into portions for writing to separate files

Hi,
several Gb (gigabits) comes to roughly 1 GB, which is not much. Even if you meant several GB, that shouldn't be a problem either.

The first thing I'd do is create an index on the column used for dividing the data. Then I'd just use the COPY command with a proper SELECT to save each subset to a file.

If each select takes several hours, make the select faster; a good index usually helps. You can also post the query that takes too long here, and attach its plan as well.
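A minimal sketch of the above, assuming a hypothetical table `readings` with a timestamp column `ts` (substitute your actual names); this needs a running server, and note that server-side COPY TO writes on the server and requires the appropriate privilege, while psql's \copy writes the file client-side:

```
-- Index the column the subsets are cut on:
CREATE INDEX readings_ts_idx ON readings (ts);

-- Export one 20-minute window (run \copy from psql for a client-side file):
COPY (SELECT *
      FROM readings
      WHERE ts >= '2014-05-01 00:00'
        AND ts <  '2014-05-01 00:20'
      ORDER BY ts)
TO '/tmp/chunk_0001.csv' WITH (FORMAT csv, HEADER);

-- To see why a slow select is slow, look at its plan:
EXPLAIN ANALYZE
SELECT * FROM readings
WHERE ts >= '2014-05-01 00:00' AND ts < '2014-05-01 00:20';
```

With the index in place, each window becomes an index range scan instead of a full scan of the whole table.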

regards,
Szymon



On 1 May 2014 19:50, Seb <spluque@xxxxxxxxx> wrote:
Hello,

I've been looking for a way to write a table into multiple files, and am
wondering if there are some clever suggestions.  Say we have a table
that is too large (several Gb) to write to a file that can be used for
further analyses in other languages.  The table consists of a timestamp
field and several numeric fields, with records every tenth of a second.
It could be meaningfully broken down into subsets of, say, 20 minutes'
worth of records.  One option is to write a shell script that loops
through the timestamp, selects the corresponding subset of the table,
and writes it as a unique file.  However, this would be extremely slow
because each select takes several hours, and there can be hundreds of
subsets.  Is there a better way?
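The loop described above could be sketched like this; the table name (`readings`), column (`ts`), database, start time, and range length are all placeholders, and the script only prints the generated \copy commands so you can pipe them into psql (e.g. `sh make_chunks.sh | psql -d mydb`) once they look right:

```shell
#!/bin/sh
# Emit one psql \copy command per 20-minute window.
# Assumed schema: table "readings" with timestamp column "ts".

copy_cmd() {    # copy_cmd <lo> <hi> <outfile> -> one \copy line on stdout
    printf '\\copy (SELECT * FROM readings WHERE ts >= '\''%s'\'' AND ts < '\''%s'\'' ORDER BY ts) TO '\''%s'\'' CSV HEADER\n' \
        "$1" "$2" "$3"
}

START="2014-05-01 00:00:00"   # assumed start of the range
TOTAL=60                      # minutes to export (shortened for illustration)
STEP=20                       # window size in minutes

i=0
while [ "$i" -lt "$TOTAL" ]; do
    # GNU date: shift the start timestamp by i / i+STEP minutes
    lo=$(date -u -d "$START UTC + $i minutes" '+%Y-%m-%d %H:%M:%S')
    hi=$(date -u -d "$START UTC + $((i + STEP)) minutes" '+%Y-%m-%d %H:%M:%S')
    copy_cmd "$lo" "$hi" "chunk_$(printf '%04d' $((i / STEP))).csv"
    i=$((i + STEP))
done
```

Generating all the commands up front and running them in one psql session avoids restarting a connection per subset, and with an index on the timestamp column each window is a cheap range scan rather than a multi-hour query.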

Cheers,

--
Seb



--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

