On Thu, May 21, 2009 at 3:37 PM, Alex Thurlow <alex@xxxxxxxxxxxxxxxxxxx> wrote:
> I was hoping to not have to change all my code to automate the partitioning
> table creation stuff, but if that's really the best way, I'll check it out.
> Thanks for the advice.

About 18 months ago we split a large table with 300+ million rows into 100
partitions. Query speed improved by at least two orders of magnitude.
Postgres is exceptionally good at dealing with tables in the 10 million row
range, and that's what we gave it. Our primary queries on the data were able
to go directly to the right partition, but querying the parent table with
constraint exclusion was still nearly as fast. It was totally worth the 10
days or so it took to set up, test (on a replica!) and migrate the data.

In your case you could have a natural migration by just adding the child
tables, inserting your new data there, and deleting old data from your main
table. After 30 days your main table will be empty and you can simply
truncate it, freeing up all the space.
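
To make that concrete, here is a rough sketch using the inheritance-based
partitioning available in Postgres at the time. The parent table "logs" and
its "log_date" column are made-up names for illustration; adjust to your
schema and to however you want to slice the partitions (per day, per month,
etc.):

    -- Hypothetical parent table:
    --   CREATE TABLE logs (log_date date NOT NULL, payload text);

    -- One child table per time slice. The CHECK constraint is what lets
    -- constraint exclusion skip the other partitions at query time.
    CREATE TABLE logs_2009_06_01 (
        CHECK (log_date = DATE '2009-06-01')
    ) INHERITS (logs);

    -- Point new inserts at the child table instead of the parent...
    INSERT INTO logs_2009_06_01 (log_date, payload)
    VALUES (DATE '2009-06-01', 'example row');

    -- ...and age old rows out of the parent as they expire.
    DELETE FROM logs WHERE log_date < CURRENT_DATE - INTERVAL '30 days';

    -- Once the parent is empty (after your 30 days), reclaim its space.
    -- ONLY keeps the child tables untouched.
    TRUNCATE ONLY logs;

    -- Queries against the parent can then prune to the right child,
    -- provided constraint exclusion is enabled:
    SET constraint_exclusion = on;
    SELECT count(*) FROM logs WHERE log_date = DATE '2009-06-01';

Queries that know which partition they want can of course hit the child
table directly, which is what we did for our hottest paths.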