Re: Partitioning into thousands of tables?

On Fri, Aug 6, 2010 at 1:10 AM, Data Growth Pty Ltd
<datagrowth@xxxxxxxxx> wrote:
> I have a table of around 200 million rows, occupying around 50G of disk.  It
> is slow to write, so I would like to partition it better.
>

How big do you expect your data to get?  I have two tables, each
partitioned into 100 subtables using a modulo operator on the integer
primary key column.  This keeps the row count for each partition in
the 5-million range, which PostgreSQL handles extremely well.  When I
run a mass update or select that scans all partitions, the
non-matching ones are skipped quickly with a cheap index lookup in
each, so nothing really gets hammered.
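
Concretely, the setup looks something like this.  A minimal sketch
with made-up table and column names (big_table, id, payload), using
the inheritance-plus-CHECK-constraint approach that was the standard
way to partition at the time (declarative hash partitioning in
PostgreSQL 10+ is the modern equivalent):

    -- Parent table; stays empty, children hold the rows.
    -- Note: a PK on the parent would not be enforced across
    -- children under inheritance, so uniqueness is per-child.
    CREATE TABLE big_table (
        id      bigint NOT NULL,
        payload text
    );

    -- One child per modulo bucket (2 of 100 shown).  The CHECK
    -- constraint records which rows each child may hold.
    CREATE TABLE big_table_p00 (CHECK (id % 100 = 0)) INHERITS (big_table);
    CREATE TABLE big_table_p01 (CHECK (id % 100 = 1)) INHERITS (big_table);
    -- ... big_table_p02 through big_table_p99 ...

    -- Indexes are not inherited, so each child needs its own.
    CREATE INDEX ON big_table_p00 (id);
    CREATE INDEX ON big_table_p01 (id);

    -- Route inserts against the parent to the right child.
    CREATE OR REPLACE FUNCTION big_table_insert() RETURNS trigger AS $$
    BEGIN
        EXECUTE format('INSERT INTO big_table_p%s SELECT ($1).*',
                       lpad((NEW.id % 100)::text, 2, '0'))
        USING NEW;
        RETURN NULL;  -- the row went to a child, not the parent
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER big_table_route
        BEFORE INSERT ON big_table
        FOR EACH ROW EXECUTE PROCEDURE big_table_insert();

One caveat: the planner can only exclude children outright when the
query repeats the modulo expression (constraint exclusion cannot
derive id % 100 = 42 from id = 42).  With a plain "WHERE id = 42"
predicate every child gets probed, but each probe is exactly the
cheap index lookup described above.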
