
Re: huge price database question..


 



On 03/20/2012 07:08 PM, Jim Green wrote:
On 20 March 2012 22:03, David Kerr <dmk@xxxxxxxxxxxxxx> wrote:

\copy on 1.2 million rows should only take a minute or two; you could make that table "unlogged" as well to speed it up more. If you can truncate / drop / create / load / then index the table each time, you'll get the best throughput.
Thanks. Could you explain the "truncate / drop / create / load / then index the table each time then you'll get the best throughput" part, or point me to some docs?

Jim

I'm imagining that you're loading the raw file into a temporary staging table that you're going to use to process / slice the new data into your 7000+ per-stock tables.

That staging table probably doesn't need to stick around once you've processed your stocks through it, so you can just truncate or drop it after you're done.

When you create it, skip the indexes: inserts are faster that way, because Postgres doesn't have to update an index on every insert. Then, once the table is fully loaded, create the indexes (so they're actually useful for the processing queries) and process the data into the various stock tables.
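A minimal sketch of that pattern, run from psql (the table and column names here are hypothetical, just to illustrate the shape):

```sql
-- Hypothetical staging table: UNLOGGED (skips WAL) and created with
-- no indexes, so the bulk load itself is as fast as possible.
CREATE UNLOGGED TABLE ticks_staging (
    symbol text,
    ts     timestamptz,
    price  numeric,
    volume bigint
);

-- Bulk-load the raw daily file (client-side \copy, run in psql).
\copy ticks_staging FROM 'daily_ticks.csv' WITH (FORMAT csv)

-- Index only after loading, so the index is built once, not per row.
CREATE INDEX ON ticks_staging (symbol);

-- Slice rows out into the per-stock tables, e.g. for one symbol:
INSERT INTO ticks_aapl (ts, price, volume)
SELECT ts, price, volume
FROM ticks_staging
WHERE symbol = 'AAPL';

-- Done processing; TRUNCATE (or DROP) resets the staging table cheaply.
TRUNCATE ticks_staging;
```

Note that an UNLOGGED table is not crash-safe — it's truncated on crash recovery — which is fine here because the staging data is disposable and can be reloaded from the file.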

Dave




--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

