Hello,

I have a question regarding one of the caveats in the docs: http://www.postgresql.org/docs/8.3/static/ddl-partitioning.html

"Partitioning using these techniques will work well with up to perhaps a hundred partitions; don't try to use many thousands of partitions."

What's the alternative? Could nested partitioning do the trick?

I have millions of rows (numbers, timestamps and text (<4 kB)) which are frequently updated, and there are also frequent inserts. Partitioning was my first thought as a solution to this problem: I want to avoid long-lasting locks, index rebuild problems and never-ending vacuum. Write performance may be low, as long as at the same time I have no problem selecting single rows by primary key (bigint). Partitioning seems to be the solution, but I'm sure I will end up with several thousand automatically generated partitions.

Thanks

--
Regards,
Grzegorz
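To make it concrete, by "nested partitioning" I mean chaining inheritance two levels deep, something like the following sketch (table and column names are made up for illustration):

```sql
-- Hypothetical two-level partitioning via inheritance (8.3 style).
-- Top level: the parent table applications query against.
CREATE TABLE events (
    id         bigint PRIMARY KEY,
    created_at timestamp NOT NULL,
    payload    text
);

-- Second level: one child per year.
CREATE TABLE events_2009 (
    CHECK (created_at >= '2009-01-01' AND created_at < '2010-01-01')
) INHERITS (events);

-- Third level: one grandchild per month, inheriting from the yearly child,
-- so each leaf stays small even with thousands of leaves overall.
CREATE TABLE events_2009_01 (
    CHECK (created_at >= '2009-01-01' AND created_at < '2009-02-01')
) INHERITS (events_2009);
```

What I don't know is whether constraint exclusion can prune at the intermediate (yearly) level, or whether the planner still has to consider every leaf table, which is exactly why I'm asking.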