On Fri, Aug 6, 2010 at 1:10 AM, Data Growth Pty Ltd <datagrowth@xxxxxxxxx> wrote:
> I have a table of around 200 million rows, occupying around 50G of disk. It
> is slow to write, so I would like to partition it better.

How big do you expect your data to get?

I have two tables partitioned into 100 subtables using a modulo operator on the PK integer ID column. This keeps the row count for each partition in the 5-million range, which Postgres handles extremely well. When I do a mass update/select that causes all partitions to be scanned, it is very fast at skipping over partitions based on a quick index lookup. Nothing really gets hammered.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
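[Editor's note: the modulo partitioning scheme described above can be sketched as follows. At the time of this post (2010) it would have been built with table inheritance, CHECK constraints like `id % 100 = N`, and an insert trigger; on PostgreSQL 10 and later, declarative hash partitioning expresses the same idea directly, though it routes rows by a hash of the key rather than the raw `id % 100`, giving the same even spread. The `events` table and column names here are hypothetical, not from the original post.]

```sql
-- Hypothetical parent table partitioned 100 ways by hash of the PK.
CREATE TABLE events (
    id      bigint NOT NULL,
    payload text
) PARTITION BY HASH (id);

-- Each partition holds rows whose hashed id, taken modulo 100,
-- equals the given remainder (PostgreSQL 10+ syntax).
CREATE TABLE events_p0 PARTITION OF events
    FOR VALUES WITH (MODULUS 100, REMAINDER 0);

-- Generate the remaining 99 partitions instead of writing them by hand.
DO $$
BEGIN
    FOR r IN 1..99 LOOP
        EXECUTE format(
            'CREATE TABLE events_p%s PARTITION OF events
               FOR VALUES WITH (MODULUS 100, REMAINDER %s)', r, r);
    END LOOP;
END $$;
```

With roughly 200 million rows, this keeps each partition in the few-million-row range the poster describes, and the planner can prune partitions it does not need to scan.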