Re: tables with 300+ partitions

Pablo Alcaraz wrote:
I had a big big big table. I tried to divide it into 300 partitions with 30M rows each. The problem appeared when I used the table to insert information: the performance was LOW.

That's very vague. What exactly did you do? Just inserted a few rows, or perhaps a large bulk load of millions of rows? What was the bottleneck, disk I/O or CPU usage? How long did the operation take, and how long did you expect it to take?
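For example (table and column names here are only placeholders), the same load could be timed from psql against both the partitioned and the non-partitioned table, single-row inserts and a bulk load separately:

\timing on

-- single-row insert: per-statement planning overhead dominates
INSERT INTO orders VALUES (1, now(), 'abc');

-- bulk load of many rows in one statement
INSERT INTO orders
SELECT i, now(), 'abc' FROM generate_series(1, 1000000) AS i;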

I did some testing. I created an empty table with 300 partitions. Then I inserted some rows into it, and the performance was SLOW too.

SLOW = 1% of the performance of a non-partitioned table. That is too much.

Then I made a version with 10 partitions of 30M rows each and inserted rows there. The performance was the same as the non-partitioned table version.

That suggests that the CPU time is spent in planning the query, possibly in constraint exclusion. But that's a very different scenario from having millions of rows in each partition.
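To illustrate what constraint exclusion is doing: in the usual inheritance-based setup, each child table carries a CHECK constraint, and with constraint_exclusion enabled the planner compares the query's WHERE clause against every child's constraint to decide which partitions it can skip. With 300 children that test alone can dominate a trivial statement. A rough sketch (table names are made up):

CREATE TABLE measurement (id bigint, logdate date, value numeric);

CREATE TABLE measurement_2007_01 (
    CHECK (logdate >= DATE '2007-01-01' AND logdate < DATE '2007-02-01')
) INHERITS (measurement);

CREATE TABLE measurement_2007_02 (
    CHECK (logdate >= DATE '2007-02-01' AND logdate < DATE '2007-03-01')
) INHERITS (measurement);

SET constraint_exclusion = on;

-- The plan should touch only measurement_2007_01; with hundreds of
-- children, running this exclusion test is pure planning (CPU) overhead.
EXPLAIN SELECT * FROM measurement
WHERE logdate >= DATE '2007-01-10' AND logdate < DATE '2007-01-20';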


I suspect there is a locking problem there. I think every SQL command takes a lock on ALL the partitions, so the performance with concurrent inserts and updates is far worse than the non-partitioned version.

Every query takes an AccessShareLock on each partition, but that doesn't prevent concurrent inserts or updates, and acquiring the locks isn't very expensive. In other words: no, that's not it.
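If you want to see those locks for yourself, one way is to keep a transaction open in one session after querying the parent table, and then run something like this from a second session:

SELECT c.relname, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE l.mode = 'AccessShareLock'
ORDER BY c.relname;

AccessShareLock only conflicts with ACCESS EXCLUSIVE locks (ALTER TABLE, DROP TABLE and the like), so plain inserts and updates on the partitions are not blocked by it.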

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

