On Fri, Aug 6, 2010 at 8:08 AM, Joshua Tolley <eggyknap@xxxxxxxxx> wrote:
> On Fri, Aug 06, 2010 at 03:10:30PM +1000, Data Growth Pty Ltd wrote:
>> Is there any significant performance problem associated with partitioning
>> a table into 2500 sub-tables? I realise a table scan would be horrendous,
>> but what if all accesses specified the partitioning criteria "sid"? Such
>> a scheme would be the simplest to maintain (I think) with the best
>> localisation of writes.
>
> I seem to remember some discussion on pgsql-hackers recently about the number
> of partitions and its effect on performance, especially planning time.
> Unfortunately I can't find it right now, but in general the conclusion was
> it's bad to have lots of partitions, where "lots" is probably 100 or more.

When it comes to planning time, it's a trade-off. If you have a reporting
database that routinely runs queries taking 30 seconds to 30 minutes, an
extra 10 seconds of planning is no big deal. If you need your queries to run
in sub-second times, then an extra 10 seconds is a very big deal.

We partition our stats data at work by day and keep it around for years, so
we have 600 to 1000 partitions. But any query we run takes minutes, so a
little extra planning time doesn't matter there.
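
For anyone following along, a per-day scheme like that is typically built on
inheritance plus CHECK constraints, roughly like the sketch below (table and
column names are made up for illustration, not our actual setup):

    -- Parent table holding no rows of its own.
    CREATE TABLE stats (
        sid        integer NOT NULL,
        logged_at  date    NOT NULL,
        payload    text
    );

    -- One child table per day, with a CHECK constraint the planner can use.
    CREATE TABLE stats_2010_08_06 (
        CHECK (logged_at = DATE '2010-08-06')
    ) INHERITS (stats);

    -- With constraint exclusion enabled, a query that names the partitioning
    -- column lets the planner skip every other child at plan time.
    SET constraint_exclusion = partition;   -- the default since 8.4
    SELECT count(*)
      FROM stats
     WHERE logged_at = DATE '2010-08-06';

The planner can only prune children when the query actually mentions the
partitioning column, which is why the original question's point about always
specifying "sid" matters so much.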