On Fri, Oct 12, 2012 at 10:00 AM, Lonni J Friedman <netllama@xxxxxxxxx> wrote:
> On Fri, Oct 12, 2012 at 7:44 AM, Chitra Creta <chitracreta@xxxxxxxxx> wrote:
>> Hi,
>>
>> I currently have a table that is growing very quickly - i.e. 7 million
>> records in 5 days. This table acts as a placeholder for statistics, and
>> hence the records are merely inserted and never updated or deleted.
>>
>> Many queries are run on this table to obtain trend analysis. However,
>> these queries are now starting to take a very long time (hours) to
>> execute due to the size of the table.
>>
>> I have put indexes on this table, to no significant benefit. Some of the
>> other strategies I have thought of:
>> 1. Purge old data
>> 2. Reindex
>> 3. Partition
>> 4. Create daily, monthly and yearly summary tables that contain the
>> aggregated data specific to the statistics required
>>
>> Does anyone know the best practice for handling this situation?
>>
>> I would appreciate any knowledge sharing on the pros and cons of the
>> above, or on any other strategies that I could put in place.
>
> Partitioning is probably your best solution. 3 and 4 sound like
> variations on the same thing. Before you go that route, you should
> make sure that your bottleneck really is the sheer volume of data and
> not some other problem. Are you sure that the indices you created are
> being used, and that you have all the indices your queries need? Look
> at the query plan output from EXPLAIN, and/or post it here if you're
> unsure.

Partitioning is not a strategy for improving query performance unless
your queries exploit the structure of the partitions in some way, for
example by letting constraint exclusion skip child tables whose date
ranges the WHERE clause rules out. Rough sketches of both points follow.
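On Lonni's point first: before restructuring anything, check with
EXPLAIN ANALYZE whether the planner is actually touching your indexes.
This is an untested sketch; the table and column names are made up, so
substitute your real schema:

CREATE TABLE stats (
    id         bigserial PRIMARY KEY,
    metric     text        NOT NULL,
    value      numeric     NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

CREATE INDEX stats_created_at_idx ON stats (created_at);

-- EXPLAIN ANALYZE executes the query and reports the actual plan and
-- timings. On a table this size, a "Seq Scan on stats" in the output,
-- despite the index on created_at, is the red flag to look for.
EXPLAIN ANALYZE
SELECT metric, count(*), avg(value)
FROM stats
WHERE created_at >= now() - interval '7 days'
GROUP BY metric;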
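If the plans look sane and the data volume really is the problem, here
is what exploiting the partition structure means concretely. Another
untested sketch, this time of inheritance-based partitioning by month,
reusing the made-up columns from above:

-- The parent stays empty; the children hold the rows.
CREATE TABLE stats_parent (
    id         bigserial,
    metric     text        NOT NULL,
    value      numeric     NOT NULL,
    created_at timestamptz NOT NULL
);

-- One child per month, each with a CHECK constraint describing its range.
CREATE TABLE stats_2012_10 (
    CHECK (created_at >= '2012-10-01' AND created_at < '2012-11-01')
) INHERITS (stats_parent);

CREATE TABLE stats_2012_11 (
    CHECK (created_at >= '2012-11-01' AND created_at < '2012-12-01')
) INHERITS (stats_parent);

CREATE INDEX stats_2012_10_created_idx ON stats_2012_10 (created_at);
CREATE INDEX stats_2012_11_created_idx ON stats_2012_11 (created_at);

-- Inserts have to be routed to the right child, either by the
-- application or by a trigger on the parent (omitted here for brevity).

-- With constraint_exclusion at its default of 'partition', a query
-- whose WHERE clause contradicts a child's CHECK constraint never
-- scans that child. EXPLAIN here should show stats_2012_11 excluded
-- from the plan entirely:
EXPLAIN
SELECT count(*)
FROM stats_parent
WHERE created_at >= '2012-10-01' AND created_at < '2012-11-01';

A nice side effect is that your option 1 becomes nearly free: purging a
month of old data is a DROP TABLE on one child, rather than a massive
DELETE followed by vacuuming.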
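Your option 4 is complementary rather than an alternative. If the trend
queries are always aggregates, precomputing them once means the reports
stop scanning millions of raw rows. Again an untested sketch against the
hypothetical stats table above, meant to run once a day from cron:

CREATE TABLE stats_daily (
    day       date    NOT NULL,
    metric    text    NOT NULL,
    n         bigint  NOT NULL,
    avg_value numeric NOT NULL,
    PRIMARY KEY (day, metric)
);

-- Fold yesterday's raw rows into one row per metric per day.
INSERT INTO stats_daily (day, metric, n, avg_value)
SELECT created_at::date, metric, count(*), avg(value)
FROM stats
WHERE created_at >= current_date - 1
  AND created_at <  current_date
GROUP BY created_at::date, metric;

Point the trend reports at stats_daily and they read a few hundred rows
instead of millions; roll the daily rows up further for the monthly and
yearly views.

merlin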