Re: Large number of partitions of a table

Tom Lane wrote:
> Victor Sudakov <vas@xxxxxxxxxx> writes:
> > What's the worst thing to happen if someone runs "SELECT COUNT(*) FROM t" where t has 10000 partitions?
> 
> > 1. The backend will crash?
> 
> Well, yeah, actually.  An ill-advised query will blow out your backend's
> memory consumption, potentially leading to a SIGKILL from the dreaded OOM
> killer[1] (if you're on Linux), resulting in a backend crash and cluster
> restart.  

Why should "SELECT COUNT(*) FROM t" ever consume more than work_mem
even if t has 10000 partitions? I've just run it with 3000 partitions
and it does not even create temporary files let alone consume any
significant amount of RAM.
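
In case it helps, this is roughly how I set up the test (table and
partition names are placeholders; creating 3000 partitions takes a
little while):

-- a sketch of my test setup, with placeholder names
CREATE TABLE t (id int, payload text) PARTITION BY RANGE (id);

DO $$
BEGIN
  FOR i IN 0..2999 LOOP
    EXECUTE format(
      'CREATE TABLE t_p%s PARTITION OF t FOR VALUES FROM (%s) TO (%s)',
      i, i * 1000, (i + 1) * 1000);
  END LOOP;
END $$;

-- temp file counters before and after the count
SELECT temp_files, temp_bytes FROM pg_stat_database
 WHERE datname = current_database();

SELECT count(*) FROM t;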

It is very difficult for me to even imagine a query that would consume
work_mem*N for N greater than 4 or so. The documentation says this is
possible, but it would have to be a very sophisticated query. And a
reasonable work_mem setting would be something like 32MB.
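
As far as I understand the docs, the multiplier comes from plans with
several sort or hash nodes, each of which may use up to work_mem on
its own. A made-up sketch of such a query (big_a, big_b, big_c are
imaginary tables):

-- each hash join, hash aggregate and sort node may use up to
-- work_mem, so this could plausibly touch ~4*work_mem at once
SELECT a.k, count(*) AS n
  FROM big_a a
  JOIN big_b b USING (k)   -- hash join: up to 1*work_mem
  JOIN big_c c USING (k)   -- second hash join: up to 1*work_mem
 GROUP BY a.k              -- hash aggregate: up to 1*work_mem
 ORDER BY n DESC;          -- sort: up to 1*work_mem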

Even considering parallelism, how would a query against a table with
10000 partitions consume enough RAM to cause an OOM kill? What is the
mechanism?
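
The only mechanism I can think of is that each parallel worker may
allocate its own work_mem per sort or hash node, so memory would scale
with workers times nodes. This is how I looked at what my test query
actually did:

-- shows how many workers were launched and per-node details
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM t;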

> 
> > 3. Only this particular query (spanning multiple partitions) will be very slow?
> 
> Guaranteed.
> 
> > 4. ?
> 
> You could run the shared lock table to full or nearly full, causing
> concurrent queries to fail all the while your SELECT COUNT(*) is
> leisurely proceeding towards completion.

What's the limit on the shared lock table? 
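
From the docs I gather it holds about max_locks_per_transaction *
(max_connections + max_prepared_transactions) entries, which with the
defaults is 64 * (100 + 0) = 6400, indeed fewer than the 10000+
relation locks a full scan of t would take. Is that the number you
mean? This is how I checked my settings:

SELECT name, setting FROM pg_settings
 WHERE name IN ('max_locks_per_transaction',
                'max_connections',
                'max_prepared_transactions');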

> > Also, what if it is not a SELECT but an UPDATE query spanning multiple partitions? Does it make any difference?
> 
> On v14, no.  Before that, you have the O(N^2) planner issues
> I mentioned upthread.
> 
> All of these things can be worked around, but you have to ask
> yourself if the benefits you get from using more rather than
> fewer partitions are worth fighting these sorts of fires.

Thank you for the valuable comments!

> 
> [1] https://www.postgresql.org/docs/current/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT

Reading this, I'm not sure whether getting a "Cannot allocate memory"
error (with vm.overcommit_memory=2, as the docs recommend) is really
any better for Postgres than being killed by the OOM killer. I'm not
arguing here, just stating that I genuinely don't know.

-- 
Victor Sudakov VAS4-RIPE
http://vas.tomsk.ru/
2:5005/49@fidonet




