On 03/20/2012 07:26 PM, Jim Green wrote:
On 20 March 2012 22:21, David Kerr <dmk@xxxxxxxxxxxxxx> wrote:
I'm imagining that you're loading the raw file into a temporary table that
you're going to use to process / slice the new data into your 7000+ actual
tables, one per stock.
Thanks! Would "slice the new data into your 7000+ actual tables, one per
stock" be a relatively quick operation?
Well, it solves the problem of having to split up the raw file by stock
symbol. From there you can run multiple jobs in parallel to load the
individual stocks into their individual tables, which is probably faster
than what you've got going now.
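
Just to make that concrete, here's a rough sketch of what I mean; the
staging table ticks_staging, the per-stock table ticks_aapl, the columns,
and the file path are all invented for illustration:

    -- Bulk-load the whole raw file into a staging table in one pass.
    CREATE TEMP TABLE ticks_staging (
        symbol  text,
        ts      timestamptz,
        price   numeric,
        volume  bigint
    );

    COPY ticks_staging FROM '/path/to/raw_ticks.csv' WITH (FORMAT csv);

    -- Each parallel job then slices one symbol's rows into its own table.
    INSERT INTO ticks_aapl (ts, price, volume)
    SELECT ts, price, volume
      FROM ticks_staging
     WHERE symbol = 'AAPL';

If you're going to slice the staging table 7000+ times like that, an index
on ticks_staging (symbol) is probably worth it, otherwise each INSERT is a
full scan of the staging table.
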
It would probably be faster to load the individual stocks directly from
the file, but then, as you said, you have to split it up first, so that
may take time.
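
If you do go the split-first route, each parallel job would just be a
straight COPY of its pre-split slice into the per-stock table, something
like this (again, file paths and table names are made up):

    -- One of these per stock, run from separate sessions in parallel.
    -- Server-side COPY needs the file readable by the server and superuser
    -- rights; \copy from psql does the same thing client-side.
    COPY ticks_aapl (ts, price, volume)
        FROM '/data/split/AAPL.csv' WITH (FORMAT csv);
    COPY ticks_msft (ts, price, volume)
        FROM '/data/split/MSFT.csv' WITH (FORMAT csv);
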