tables with 300+ partitions

Hi All!

I have a very big table. I tried to divide it into 300 partitions with about 30M rows each. The problem appeared when I used the table to insert data: the performance was LOW.

I did some testing. I created an empty table with 300 partitions, then inserted some rows into it, and the performance was SLOW too.
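To give an idea of what I mean by a partitioned table, here is a simplified sketch of the kind of inheritance-based setup I mean (table and column names are invented for the example, and I am assuming a trigger router; rules are the other common option):

    CREATE TABLE measurements (
        id      bigint  NOT NULL,
        bucket  int     NOT NULL,   -- column used to pick the partition
        payload text
    );

    -- one child table per bucket value; in my case 300 of these
    CREATE TABLE measurements_p0 (CHECK (bucket = 0)) INHERITS (measurements);
    CREATE TABLE measurements_p1 (CHECK (bucket = 1)) INHERITS (measurements);
    -- ... and so on up to measurements_p299

    -- trigger function that routes every insert to the right child
    CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
    BEGIN
        IF NEW.bucket = 0 THEN
            INSERT INTO measurements_p0 VALUES (NEW.*);
        ELSIF NEW.bucket = 1 THEN
            INSERT INTO measurements_p1 VALUES (NEW.*);
        -- ... one branch per partition
        ELSE
            RAISE EXCEPTION 'no partition for bucket %', NEW.bucket;
        END IF;
        RETURN NULL;  -- the row is already stored in the child, skip the parent
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER measurements_insert_trg
        BEFORE INSERT ON measurements
        FOR EACH ROW EXECUTE PROCEDURE measurements_insert();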

SLOW = about 1% of the performance of the non-partitioned table. That is far too big a penalty.

Then I built a version with 10 partitions of 30M rows each and inserted rows there. The performance was the same as the non-partitioned version.
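For reference, a test like mine can be reproduced with nothing fancier than a bulk insert, run from psql with \timing against each version of the table (same invented example table as above):

    \timing
    -- 100k rows spread over the partitions (use % 10 for the 10-partition version)
    INSERT INTO measurements (id, bucket, payload)
        SELECT n, n % 300, 'x'
        FROM generate_series(1, 100000) AS g(n);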

I suspect there is a locking problem. I think every SQL command takes a lock on ALL the partitions, so performance with concurrent inserts and updates is far worse than with the non-partitioned version.
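If somebody wants to verify this, running a statement inside a transaction and then querying pg_locks shows which relations it took locks on (again the invented example table; I am assuming pg_backend_pid() to filter on the current session):

    BEGIN;
    EXPLAIN SELECT count(*) FROM measurements WHERE bucket = 17;

    -- locks currently held by this backend
    SELECT relation::regclass AS rel, mode
    FROM pg_locks
    WHERE pid = pg_backend_pid() AND relation IS NOT NULL;

    ROLLBACK;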

The performance degrades with the number of partitions, and it degrades fast: I already have trouble with 40 partitions.

Am I right? Is there a workaround? Can I replace the partitioned version with another schema? Any suggestions? I would prefer something transparent to the application, because it uses EJB3 and any change to the database layer means deep changes and retesting.


Regards

Pablo Alcaraz


