On Wed, Aug 22, 2012 at 4:06 PM, Nick <nboutelier@xxxxxxxxx> wrote:
> I have a table with 40 million rows and haven't had any performance issues yet.
>
> Are there any rules of thumb as to when a table starts getting too big?

No.

Assuming you decided it were "too big", what could you do about it? If there are chunks of data that you don't need anymore, why wait for the table to become too big before removing them?

And partitioning very often isn't the answer, either. There are very few problems that ill-conceived partitioning won't make worse, and there are very many problems which even the best-conceived partitioning will fail to improve. If you have one of the cases where partitioning is a good solution, don't wait for the table to become "too big". Just go do it.

> For example, maybe if the index size is 6x the amount of ram, if the table is 10% of total disk space, etc?

If you need to be able to rapidly insert new rows in bulk, and you can't sort them before inserting because there are multiple indexes with completely different sort orders and the incoming rows cover the entire key range of at least some of those indexes, then your performance will collapse long before the indexes reach 6x the amount of RAM.

But, what can you do about it? Maybe partitioning will fix this, maybe it won't. If it will, why wait for a rule of thumb to be met? If it won't, what do you actually do once the rule of thumb is met?

I guess one rule of thumb I would have is: if for some reason I had to CLUSTER or REINDEX the table, how long would it take to do so? If that is much longer than I can reasonably schedule as a maintenance window, I would be worried.

Cheers,

Jeff
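A minimal sketch of the "just go do it" partitioning case Jeff describes, assuming a hypothetical events table whose old rows are retired by month. The declarative PARTITION BY syntax shown requires PostgreSQL 10 or later; at the time of this thread the same scheme would have been built with table inheritance and triggers.

    -- Hypothetical table, range-partitioned by month on a timestamp column.
    CREATE TABLE events (
        id         bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        payload    text
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE events_2012_08 PARTITION OF events
        FOR VALUES FROM ('2012-08-01') TO ('2012-09-01');

    CREATE TABLE events_2012_09 PARTITION OF events
        FOR VALUES FROM ('2012-09-01') TO ('2012-10-01');

    -- Retiring a chunk of data you no longer need becomes a quick
    -- metadata operation instead of a long-running bulk DELETE.
    DROP TABLE events_2012_08;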
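And a rough sketch of how to look at the sizes and rebuild times discussed above, again against the hypothetical events table. pg_relation_size(), pg_indexes_size(), and pg_total_relation_size() are standard PostgreSQL admin functions; \timing is a psql meta-command. Comparing the index footprint with the machine's RAM, and timing a REINDEX or CLUSTER on a copy of the data, says more than any fixed ratio.

    -- Heap, index, and total on-disk size of the table.
    SELECT pg_size_pretty(pg_relation_size('events'))       AS heap_size,
           pg_size_pretty(pg_indexes_size('events'))        AS index_size,
           pg_size_pretty(pg_total_relation_size('events')) AS total_size;

    -- Time a rebuild to see whether it fits a maintenance window.
    \timing on
    REINDEX TABLE events;
    -- CLUSTER events USING events_created_at_idx;  -- rewrites the heap in index order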