Re: Large number of partitions of a table

Victor Sudakov <vas@xxxxxxxxxx> writes:
> Tom Lane wrote:
>> Well, yeah, actually.  An ill-advised query will blow out your backend's
>> memory consumption, potentially leading to a SIGKILL from the dreaded OOM
>> killer[1] (if you're on Linux), resulting in a backend crash and cluster
>> restart.  

> Why should "SELECT COUNT(*) FROM t" ever consume more than work_mem
> even if t has 10000 partitions?

Sure, COUNT(*)'s runtime memory consumption is negligible.
But you're not thinking about overhead --- specifically,

1. 10000 relcache entries for the base tables.

2. If you have N indexes per table, N*10000 relcache entries for
the indexes.  (The planner will probably have looked at all those
indexes, even if it didn't find any use for them.)

3. 10000 SeqScan plan nodes and associated rangetable entries.

4. Likewise, 10000 instances of executor per-node state.

5. 10000 lock table entries (both shared and local lock tables).

6. Probably a few per-relation things I didn't think of.

Admittedly, each of those individual data structures is going
to be at most in the few-kilobytes range, so this is going to total
somewhere in the hundreds of megabytes, which you can probably
tolerate on modern hardware --- unless some dozens of sessions
are all doing the same kind of thing.
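The arithmetic above can be sketched as a quick back-of-envelope estimate. The per-entry sizes below are illustrative assumptions ("a few kilobytes" each), not measured values; actual sizes depend on schema and PostgreSQL version:

```python
# Rough per-session overhead estimate for a query touching 10000
# partitions. All per-entry sizes are assumed, not measured.

PARTITIONS = 10_000
INDEXES_PER_TABLE = 3          # assumed; varies by schema

KB = 1024
per_entry = {
    "table relcache entry":   4 * KB,
    "index relcache entry":   4 * KB,
    "plan node + rangetable": 2 * KB,
    "executor node state":    2 * KB,
    "lock table entries":     1 * KB,  # shared + local
}

total = PARTITIONS * (
    per_entry["table relcache entry"]
    + INDEXES_PER_TABLE * per_entry["index relcache entry"]
    + per_entry["plan node + rangetable"]
    + per_entry["executor node state"]
    + per_entry["lock table entries"]
)

print(f"~{total / (1024**2):.0f} MB per session")  # → ~205 MB
```

With these assumed sizes a single session lands around 200 MB, squarely in the "hundreds of megabytes" range above, and a few dozen concurrent sessions multiply that accordingly.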

Basically, this is likely to work until it doesn't ... and
when it doesn't, you're going to be in the office at 3AM
trying to wrangle a recalcitrant system.  Just sayin'.

			regards, tom lane




