Re: Query on partitioned table needs memory n_partitions * work_mem

Thank you for the feedback.

So I've managed to reduce the query to a rather simple one:

SELECT
    workitem_n, test_executable_n,
    bool_or(test_resulttype_n IN (2,3))
FROM
    test_runs_raw
GROUP BY
    workitem_n, test_executable_n
LIMIT 10;


The table test_runs_raw is partitioned by RANGE (workitem_n) into 1000 partitions. All
the columns are various integer types. There is an index on workitem_n.
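
Roughly, the schema looks like this (a simplified sketch; the exact column types and
partition bounds are approximations, with the 20000-wide ranges guessed from partition
names like test_runs_raw__part_max20k that show up in the plan further down):

CREATE TABLE test_runs_raw (
    workitem_n          integer NOT NULL,
    test_executable_n   integer,
    test_resulttype_n   integer
) PARTITION BY RANGE (workitem_n);

-- ~1000 partitions of this shape; the bounds below are an assumption
CREATE TABLE test_runs_raw__part_max20k PARTITION OF test_runs_raw
    FOR VALUES FROM (0) TO (20000);
CREATE TABLE test_runs_raw__part_max40k PARTITION OF test_runs_raw
    FOR VALUES FROM (20000) TO (40000);
-- ...

CREATE INDEX ON test_runs_raw (workitem_n);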



On Thu, 11 Jul 2024, David Rowley wrote:

> On Thu, 11 Jul 2024 at 13:19, Dimitrios Apostolou <jimis@xxxxxxx> wrote:
>> I have a table with 1000 partitions on PostgreSQL 16.
>> I notice that a fairly complicated query of the form:
>>
>> SELECT ... GROUP BY ... LIMIT ...
>>
>> causes the postgres backend process to grow extremely fast, and the
>> kernel OOM killer to kill it rather soon.
>> It seems it tries to allocate at least 1000 * work_mem.
>>
>> ->  Append
>>    ->  HashAggregate
>>      ->  Seq Scan
>>    -> ... 1000 more hashagg+seqscans
>>
>>
>> Is this allocation pattern (work_mem * n_partitions) expected under any
>> scenario? I can't find it documented.  AFAIU the backend should allocate
>> up to (depth_of_execution_plan * work_mem) (putting aside the
>> hash_mem_multiplier and the parallel workers).
>
> Not depth of execution plan. It relates to the number of nodes in the
> plan which allocate work_mem or work_mem * hash_mem_multiplier.
>
> There is some documentation in [1]:
>
> "Note that a complex query might perform several sort and hash
> operations at the same time"

The latest query is not complex at all, and I don't see it doing thousands of
operations at the same time. By "number of nodes", do you mean adding up all
HashAggregate nodes under an Append node? Here is part of the EXPLAIN
ANALYZE output:

 Limit  (cost=0.01..28.00 rows=10 width=7) (actual time=43120.466..43292.246 rows=10 loops=1)
   Output: test_runs_raw.workitem_n, test_runs_raw.test_executable_n, (bool_or((test_runs_raw.test_resulttype_n = ANY ('{2,3}'::integer[]))))
   Buffers: shared hit=96 read=883975
   I/O Timings: shared read=16284.731
   ->  Append  (cost=0.01..3416299633.71 rows=1220556171 width=7) (actual time=42968.794..43139.855 rows=10 loops=1)
         Buffers: shared hit=96 read=883975
         I/O Timings: shared read=16284.731
         ->  HashAggregate  (cost=0.01..0.02 rows=1 width=7) (actual time=10.662..10.663 rows=0 loops=1)
               Output: test_runs_raw.workitem_n, test_runs_raw.test_executable_n, bool_or((test_runs_raw.test_resulttype_n = ANY ('{2,3}'::integer[])))
               Group Key: test_runs_raw.workitem_n, test_runs_raw.test_executable_n
               Batches: 1  Memory Usage: 24kB
               ->  Seq Scan on public.test_runs_raw__part_max20k test_runs_raw  (cost=0.00..0.00 rows=1 width=8) (actual time=9.960..9.961 rows=0 loops=1)
                     Output: test_runs_raw.workitem_n, test_runs_raw.test_executable_n, test_runs_raw.test_resulttype_n
         ->  HashAggregate  (cost=0.01..0.02 rows=1 width=7) (actual time=1.913..1.914 rows=0 loops=1)
               Output: test_runs_raw_1.workitem_n, test_runs_raw_1.test_executable_n, bool_or((test_runs_raw_1.test_resulttype_n = ANY ('{2,3}'::integer[])))
               Group Key: test_runs_raw_1.workitem_n, test_runs_raw_1.test_executable_n
               Batches: 1  Memory Usage: 24kB
               ->  Seq Scan on public.test_runs_raw__part_max40k test_runs_raw_1  (cost=0.00..0.00 rows=1 width=8) (actual time=1.031..1.031 rows=0 loops=1)
                     Output: test_runs_raw_1.workitem_n, test_runs_raw_1.test_executable_n, test_runs_raw_1.test_resulttype_n
[ .... 1000s of similar HashAggregate nodes ... ]
Settings: temp_buffers = '32MB', work_mem = '32MB', effective_io_concurrency = '300', max_parallel_workers_per_gather = '0', enable_hashjoin = 'off', enable_partitionwise_join = 'on', enable_partitionwise_aggregate = 'on', random_page_cost = '1.1', effective_cache_size = '6GB', from_collapse_limit = '24', join_collapse_limit = '24'
 Planning:
   Buffers: shared hit=377
 Planning Time: 1503.800 ms
 Execution Time: 56515.185 ms
(5382 rows)

Memory usage on each HashAggregate is logged as 24kB (though many HashAggregates
are missing that info). I guess the EXPLAIN output is missing some important
part of the allocations here, since I'm seeing MBs of allocations per node.
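
As an aside, the view pg_backend_memory_contexts (available since PostgreSQL 14)
lists every memory context of the current backend, which is more complete than
what EXPLAIN reports; it only covers one's own session, though, so it is mainly
useful when experimenting interactively. A sketch:

SELECT name, parent, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 20;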

I can't help but see this as a bug. I see many issues:

* postgres is not reading from partitions in parallel, but one after the
  other. It shouldn't need all this memory simultaneously.

* The memory is unnecessarily allocated early on, before any partitions
  are actually aggregated. I know this because I/O is slow on this device
  and the table sizes are huge; it's simply not possible that postgres
  went through all partitions and blew up the memory. That would take
  hours, but the OOM happens seconds after I start the query.

* The memory is not only allocated by the planner, but it's actually
  accessed. Libc's malloc() has no problem allocating gigabytes more than
  what I have available, growing the VSZ memory size without swapping out
  a single byte. That's because my kernel was set to overcommit.  The fact
  that postgres grows its RSS memory too, swaps out other processes and
  finally gets killed is because it actually writes to that memory!
  I wonder what it writes, if it hasn't read any of the data on the
  partitions. :-)

For your consideration, here is also a memory context dump from when the backend
consumes all the memory (with work_mem increased to 256MB and overcommit
disabled in the kernel):
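
(A dump like the one below can also be requested on demand since PostgreSQL 14:
from a separate session, which by default must be a superuser, calling
pg_log_backend_memory_contexts(pid) writes the target backend's memory context
tree to the server log. The pid here is a placeholder:)

-- replace 12345 with the pid of the backend running the query (see pg_stat_activity)
SELECT pg_log_backend_memory_contexts(12345);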


TopMemoryContext: 573520 total in 8 blocks; 214096 free (27 chunks); 359424 used
  TopTransactionContext: 8192 total in 1 blocks; 7752 free (1 chunks); 440 used
  Record information cache: 8192 total in 1 blocks; 1616 free (0 chunks); 6576 used
  RegexpCacheMemoryContext: 1024 total in 1 blocks; 760 free (0 chunks); 264 used
    RegexpMemoryContext: 44032 total in 5 blocks; 36400 free (17 chunks); 7632 used: ^(test_runs_raw)$
  Btree proof lookup cache: 8192 total in 1 blocks; 592 free (0 chunks); 7600 used
  Operator lookup cache: 24576 total in 2 blocks; 10792 free (3 chunks); 13784 used
  TableSpace cache: 8192 total in 1 blocks; 2128 free (0 chunks); 6064 used
  Type information cache: 24368 total in 2 blocks; 2648 free (0 chunks); 21720 used
  RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296 used
  MessageContext: 16777216 total in 12 blocks; 2924704 free (11 chunks); 13852512 used
    partition directory: 8192 total in 1 blocks; 592 free (0 chunks); 7600 used
  Operator class cache: 8192 total in 1 blocks; 592 free (0 chunks); 7600 used
  smgr relation table: 524288 total in 7 blocks; 231192 free (28 chunks); 293096 used
  PgStat Shared Ref Hash: 50224 total in 2 blocks; 688 free (0 chunks); 49536 used
  PgStat Shared Ref: 90112 total in 14 blocks; 3648 free (14 chunks); 86464 used
  PgStat Pending: 294912 total in 39 blocks; 193864 free (788 chunks); 101048 used
  TransactionAbortContext: 32768 total in 1 blocks; 32504 free (0 chunks); 264 used
  Portal hash: 8192 total in 1 blocks; 592 free (0 chunks); 7600 used
  TopPortalContext: 8192 total in 1 blocks; 7664 free (0 chunks); 528 used
    PortalContext: 1024 total in 1 blocks; 624 free (0 chunks); 400 used: <unnamed>
      ExecutorState: 8462432 total in 13 blocks; 3316568 free (9 chunks); 5145864 used
        HashAgg meta context: 8192 total in 1 blocks; 7720 free (0 chunks); 472 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        HashAgg meta context: 100671536 total in 2 blocks; 5528 free (0 chunks); 100666008 used
          ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        HashAgg meta context: 201334832 total in 2 blocks; 5528 free (0 chunks); 201329304 used
          ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
        ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
[...]
        2907 more child contexts containing 13752609344 total in 3787 blocks; 25715912 free (0 chunks); 13726893432 used
-- NOTE: the total number above is the culprit for the OOM.

  Relcache by OID: 131072 total in 5 blocks; 38744 free (13 chunks); 92328 used
  CacheMemoryContext: 47147632 total in 599 blocks; 7315600 free (2 chunks); 39832032 used
    relation rules: 16384 total in 5 blocks; 2272 free (1 chunks); 14112 used: pg_settings
    index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: pg_trigger_oid_index
    index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: pg_trigger_tgconstraint_index
    index info: 1024 total in 1 blocks; 40 free (0 chunks); 984 used: pg_publication_rel_prpubid_index
    index info: 2048 total in 2 blocks; 728 free (2 chunks); 1320 used: pg_policy_polrelid_polname_index
    index info: 1024 total in 1 blocks; 72 free (0 chunks); 952 used: pg_policy_oid_index
    relation rules: 16384 total in 5 blocks; 7144 free (1 chunks); 9240 used: pg_roles
    index info: 2048 total in 2 blocks; 544 free (1 chunks); 1504 used: pg_inherits_relid_seqno_index
    index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: pg_tablespace_spcname_index
    index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: pg_constraint_contypid_index
    index info: 2048 total in 2 blocks; 688 free (1 chunks); 1360 used: pg_constraint_conname_nsp_index
    index info: 2048 total in 2 blocks; 912 free (0 chunks); 1136 used: pg_constraint_conparentid_index
    index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: pg_attrdef_oid_index
    index info: 2048 total in 2 blocks; 688 free (1 chunks); 1360 used: pg_attrdef_adrelid_adnum_index
    index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used: pg_am_oid_index
    index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: pg_am_name_index
    index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: pg_rewrite_oid_index
    CachedPlanSource: 4096 total in 3 blocks; 688 free (0 chunks); 3408 used: SELECT * FROM pg_catalog.pg_rewrite WHERE ev_class = $1 AND rulename = $2
      CachedPlanQuery: 8192 total in 4 blocks; 4016 free (1 chunks); 4176 used
    SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used
    index info: 2048 total in 2 blocks; 656 free (2 chunks); 1392 used: pg_class_tblspc_relfilenode_index
    index info: 2048 total in 2 blocks; 616 free (1 chunks); 1432 used: pg_toast_2619_index
    index info: 2048 total in 2 blocks; 728 free (2 chunks); 1320 used: test_runs_raw__part_max20000k_pkey
[...]
    1017 more child contexts containing 2142696 total in 2014 blocks; 684544 free (1928 chunks); 1458152 used
  WAL record construction: 50200 total in 2 blocks; 6376 free (0 chunks); 43824 used
  PrivateRefCount: 8192 total in 1 blocks; 2648 free (0 chunks); 5544 used
  MdSmgr: 131072 total in 5 blocks; 22552 free (1 chunks); 108520 used
  LOCALLOCK hash: 262144 total in 6 blocks; 59376 free (23 chunks); 202768 used
  GUCMemoryContext: 24576 total in 2 blocks; 9440 free (4 chunks); 15136 used
    GUC hash table: 32768 total in 3 blocks; 12704 free (5 chunks); 20064 used
  Timezones: 104112 total in 2 blocks; 2648 free (0 chunks); 101464 used
  ErrorContext: 8192 total in 1 blocks; 7928 free (2 chunks); 264 used
Grand total: 16246728816 bytes in 6875 blocks; 41833616 free (3045 chunks); 16204895200 used




> Also, see the warning about execution time memory in [2].

>> NOTE: after having written the above message, it occurred to me that I have
>> enable_partitionwise_aggregate=on. Turning it off fixes the issue and
>> makes the query faster too! Expected behaviour or bug?
>
> enable_partitionwise_aggregate=on causes this query to perform an
> aggregate per partition.  If your GROUP BY clause values are
> distributed evenly throughout all partitions then you might find it's
> not much slower to execute the query with
> enable_partitionwise_aggregate=off.

The values of the first GROUP BY column (workitem_n) are not evenly distributed;
in fact, that is the column the partitions are split on. The remaining columns indeed are.

Thank you, I'll turn off this flag globally to avoid such behaviour.
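
For the record, this is what I mean by turning it off globally as opposed to only
for the current session (the ALTER SYSTEM form writes to postgresql.auto.conf,
needs superuser, and takes effect on reload):

-- current session only
SET enable_partitionwise_aggregate = off;

-- cluster-wide
ALTER SYSTEM SET enable_partitionwise_aggregate = off;
SELECT pg_reload_conf();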


> It's understandable that how PostgreSQL uses work_mem isn't ideal
> here, but unfortunately, that's the state of affairs, currently. You
> might want to reconsider your enable_partitionwise_aggregate setting
> and/or how many partitions you have.

Having wasted a long time on this, the least I can do is submit a
documentation patch. Under enable_partitionwise_aggregate, something like
"WARNING: it can increase the memory usage by at least
n_partitions * work_mem". How do I proceed with such a patch? Pointers
would be appreciated. :-)
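
As a rough illustration of that proposed warning, a back-of-envelope calculation
with the current settings (the 1000 is this table's partition count;
hash_mem_multiplier defaults to 2 here):

SELECT pg_size_pretty(
         1000                                               -- number of partitions
         * pg_size_bytes(current_setting('work_mem'))       -- e.g. '32MB'
         * current_setting('hash_mem_multiplier')::numeric  -- e.g. 2
       ) AS worst_case_hashagg_memory;

With work_mem = 32MB that comes to roughly 62 GB in the worst case.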


Thank you,
Dimitris


[1] https://www.postgresql.org/docs/current/runtime-config-resource.html
[2] https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-BEST-PRACTICES






