On 03/20/2012 09:49 PM, Jim Green wrote:
> On 20 March 2012 22:43, Andy Colson <andy@xxxxxxxxxxxxxxx> wrote:
>> Do you ever plan on batch deleting a BUNCH of records?
>
> no, after historical data is populated, I'll only add data daily. no delete..
>
>> Do you ever want to read all of one symbol (like, select avg(high) from
>> stocks where symbol = 'bob')?
>
> yes it's possible but I would more likely grab the data into R and get
> the avg in R..
>
> Thanks,
> Jim.
>
>> -Andy
Based on your answers:
> my queries would mostly be selects for one symbol for one particular day,
> or for a few hours in a particular day; occasionally I would select on
> multiple symbols for some timestamp range
one big table would probably be about the same speed as multiple smaller tables. Either way you'll hit an index first for the above usage.
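(To sketch what I mean -- the table and column names here are just made up
for illustration:)

  -- hypothetical one-big-table layout
  CREATE TABLE stocks (
      symbol text        NOT NULL,
      ts     timestamptz NOT NULL,
      high   numeric,
      low    numeric
  );

  -- one composite index covers "one symbol, one day / a few hours"
  CREATE INDEX stocks_symbol_ts_idx ON stocks (symbol, ts);

  -- typical query: an index scan on (symbol, ts) either way
  SELECT * FROM stocks
  WHERE symbol = 'bob'
    AND ts >= '2012-03-19 09:30'
    AND ts <  '2012-03-19 16:00';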
> no, after historical data is populated, I'll only add data daily. no delete..
Truncating/dropping a table is much faster than a huge delete... but if you'll never delete then it really doesn't matter.
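(Just a toy example of the difference -- stocks_2009 here stands in for some
hypothetical old partition you no longer want:)

  -- row-by-row delete: touches every matching row, leaves dead tuples behind
  DELETE FROM stocks WHERE ts < '2010-01-01';

  -- truncating or dropping a whole table/partition is basically instant
  TRUNCATE TABLE stocks_2009;
  DROP TABLE stocks_2009;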
> yes it's possible but I would more likely grab the data into R and get
> the avg in R..
but... to get the data to R you still have to step thru the entire table.
If you have a partition per symbol (which is the same as having a separate table per symbol) then I believe you can step thru it faster (just a table scan) than if you had one big table (index lookups on symbol). So in this case, partitioned would be better (I think).
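(Rough sketch of what a partition-per-symbol setup could look like using
inheritance -- stocks_bob is a made-up name, and you'd either insert directly
into the child tables or add a trigger on the parent:)

  -- one child table per symbol, inheriting the stocks table above
  CREATE TABLE stocks_bob (
      CHECK (symbol = 'bob')
  ) INHERITS (stocks);

  -- "grab everything for one symbol" is then a plain seq scan of one child
  SELECT * FROM stocks_bob ORDER BY ts;

  -- with constraint exclusion on, a query against the parent only touches
  -- the matching child
  SET constraint_exclusion = partition;
  SELECT avg(high) FROM stocks WHERE symbol = 'bob';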
So the score is:
One big table = 1
Doesn't matter = 1
Partitioned = 1
Of course, there are probably other usage patterns I'm not aware of. And I also am assuming some things based on what I've heard -- not from actual experience.
I'm not sure this was really helpful :-)
-Andy