Re: Performance impact of hundreds of partitions

On Wed, Apr 21, 2010 at 6:45 AM, Leonardo F <m_lists@xxxxxxxx> wrote:
> "The partitioning code isn't designed to scale beyond a few dozen partitions"
>
> Is it mainly a planning problem or an execution time problem?
>

I'll bet that is related to the planning and constraint exclusion
parts.  I have a couple of tables split into 100 partitions each, and
they work extremely well.  However, I was able to alter my application
so that it almost always references the correct partition directly;
the only time it does not is when a query genuinely requires a full
scan of all partitions.  All inserts go directly to the proper
partition.
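For what it's worth, the layout is the usual inheritance-style
partitioning.  A minimal sketch -- table and partition names are made
up for illustration, and constraint_exclusion is assumed to be at its
8.4+ default of 'partition':

    -- Parent table; the application never inserts into it directly.
    CREATE TABLE measurement (
        sensor_id integer NOT NULL,
        logdate   date    NOT NULL,
        reading   numeric
    );

    -- One child per month; the CHECK constraint is what lets the planner
    -- prune this partition out of plans that cannot match it.
    CREATE TABLE measurement_2010_04 (
        CHECK (logdate >= DATE '2010-04-01' AND logdate < DATE '2010-05-01')
    ) INHERITS (measurement);

    CREATE INDEX measurement_2010_04_logdate
        ON measurement_2010_04 (logdate);

    -- The application computes the partition name itself and inserts
    -- straight into the child, so no trigger or rule on the parent is needed.
    INSERT INTO measurement_2010_04 (sensor_id, logdate, reading)
    VALUES (42, '2010-04-21', 17.3);

    -- A query against the parent considers every child, but children whose
    -- CHECK constraints contradict the WHERE clause are excluded at plan time.
    SELECT count(*)
    FROM measurement
    WHERE logdate >= DATE '2010-04-01' AND logdate < DATE '2010-05-01';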

In my view, it is a big win to partition large tables so that each
partition holds no more than 5 million rows.  This keeps the indexes
small, and the query engine can quite easily skip huge chunks of them
on many queries.  Also, reindexes can be done pretty quickly and, in my
case, without seriously disrupting the application -- each partition
reindexes in 5 to 10 seconds at most.  When this was all one table, a
reindex op would lock up the application for upwards of two hours.
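The maintenance side looks roughly like this (child names carried over
from the made-up sketch above).  REINDEX blocks writes and index-based
reads on the one child it is working on, but a partition of ~5 million
rows finishes in seconds and every other partition stays available:

    -- Rebuild the indexes of one small partition at a time;
    -- repeat for each monthly child rather than the whole table at once.
    REINDEX TABLE measurement_2010_03;
    REINDEX TABLE measurement_2010_04;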

