On 03/20/2012 07:08 PM, Jim Green wrote:
> On 20 March 2012 22:03, David Kerr <dmk@xxxxxxxxxxxxxx> wrote:
>> \copy on 1.2 million rows should only take a minute or two, and you
>> could make that table "unlogged" as well to speed it up more. If you
>> can truncate / drop / create / load / then index the table each time,
>> you'll get the best throughput.
> Thanks. Could you explain the "truncate / drop / create / load / then
> index the table each time" part, or point me to some docs?
>
> Jim
I'm imagining that you're loading the raw file into a temporary staging
table that you then use to process / slice the new data into your 7000+
per-stock tables. That staging table probably doesn't need to stick
around once you've processed your stocks through it, so you can just
truncate or drop it when you're done.
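
A minimal sketch of that lifecycle (table, column, and file names here
are invented for illustration):

    -- Unlogged staging table: skips WAL logging, so bulk loads are
    -- faster, but contents are lost on a crash -- fine for scratch data.
    CREATE UNLOGGED TABLE quotes_staging (
        symbol     text,
        trade_time timestamptz,
        price      numeric,
        volume     bigint
    );

    -- Bulk-load the raw file; \copy runs client-side in psql.
    \copy quotes_staging FROM 'quotes.csv' WITH (FORMAT csv)

    -- ... slice the rows into the per-stock tables (see below) ...

    -- Then throw the staging table away, or TRUNCATE it for reuse.
    DROP TABLE quotes_staging;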
When you create it, avoid indexes so the inserts are faster (PostgreSQL
doesn't have to update the index on every insert). Then, once the table
is loaded, create the indexes (so they're actually useful) and process
the data into the various stock tables.
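
For example (again with made-up names, and assuming one table per
stock symbol):

    -- Build the index only after the bulk load, so each row insert
    -- doesn't pay the cost of index maintenance.
    CREATE INDEX quotes_staging_symbol_idx ON quotes_staging (symbol);

    -- Slice the staged rows out into one per-stock table; the index
    -- makes the WHERE clause cheap.
    INSERT INTO stock_aapl (trade_time, price, volume)
    SELECT trade_time, price, volume
      FROM quotes_staging
     WHERE symbol = 'AAPL';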
Dave