On 06/15/12 11:34 AM, Benedict Holland wrote:
I am on postgres 9.0. I don't know the answer to what should be a fairly straightforward question. I have several static tables which are very large (on the order of 14 million rows and about 10GB). They are all linked together through foreign keys and indexed on the columns which are queried most often. While they are more or less static, update operations do occur. This is not a super fast computer; it has 2 cores with 8GB of RAM, so I am not expecting queries against them to be very fast, but I am wondering, in a structural sense, whether I should divide the tables into 1-million-row tables through constraints and a view. The potential speedup could be quite large if postgresql split each query into n table chunks running on k cores and then aggregated all of the data for display or further operation. Is there any documentation on making postgresql do this, and is it worth it?
postgres won't do that; one query is one process. your application could conceivably run multiple threads, each with a separate postgres connection, and execute multiple queries in parallel, but it would have to do any aggregation of the results itself.
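[A minimal sketch of the approach described above, not from the original thread: the application opens one connection per chunk, runs the per-chunk queries in parallel threads, and does the final aggregation itself. The table and column names (big_table, id, amount), the id ranges, the DSN, and the psycopg2 driver are all assumptions for illustration.]

    # parallel_chunks.py -- client-side "parallel query" sketch
    from concurrent.futures import ThreadPoolExecutor

    import psycopg2

    DSN = "dbname=mydb user=me"  # hypothetical connection string
    CHUNKS = [(0, 3_500_000), (3_500_000, 7_000_000),
              (7_000_000, 10_500_000), (10_500_000, 14_000_000)]

    def partial_sum(id_range):
        """Run one query against one id range on its own connection."""
        lo, hi = id_range
        conn = psycopg2.connect(DSN)
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT COALESCE(SUM(amount), 0) FROM big_table "
                    "WHERE id >= %s AND id < %s", (lo, hi))
                return cur.fetchone()[0]
        finally:
            conn.close()

    if __name__ == "__main__":
        # one worker (and one backend process) per chunk
        with ThreadPoolExecutor(max_workers=len(CHUNKS)) as pool:
            partials = list(pool.map(partial_sum, CHUNKS))
        # the aggregation step postgres will not do for you
        print("total =", sum(partials))

Whether this wins anything depends on the query: on a 2-core box with one slow disk, four concurrent sequential scans may just fight over I/O rather than speed anything up.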
Also, is there a benefit to having one large table or many small tables as far as indexes go?
small tables only help if you can query the specific table you 'know' has your data. for instance, if you have time-based data and you put each month in its own table, and you know a given query only needs to look at the current month, then you just query that one month's table.
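[A minimal sketch of that routing idea, again not from the original thread: pick the one month table you know holds the data and query only it. The events_YYYYMM naming scheme, the DSN, and the use of psycopg2 are assumptions.]

    # month_table.py -- query only the per-month table you know you need
    import datetime

    import psycopg2

    def count_for_month(conn, when):
        """Hit just one month's table instead of the whole data set."""
        table = "events_%s" % when.strftime("%Y%m")  # e.g. events_201206
        with conn.cursor() as cur:
            # table names can't be bound as query parameters, so the name
            # is built from a trusted date value, never from user input
            cur.execute("SELECT count(*) FROM %s" % table)
            return cur.fetchone()[0]

    if __name__ == "__main__":
        conn = psycopg2.connect("dbname=mydb user=me")  # hypothetical DSN
        print(count_for_month(conn, datetime.date(2012, 6, 15)))
        conn.close()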
-- john r pierce N 37, W 122 santa cruz ca mid-left coast