Thank you!

> For example, if you have one Parallel Hash Join in your plan, it could
> allocate up to 512MB * 3 of shared memory (3 = leader process + 2
> workers).

I'm executing the query with a smaller work_mem now, it will take some time.
But I'm still confused why it used all of /dev/shm (4GB) and failed with a
"no space left" error while work_mem = 512MB.

> If you run EXPLAIN ANALYZE you'll see how much memory is used by
> individual operations.

I cannot run EXPLAIN ANALYSE, because the query fails. This is the plain
EXPLAIN output for the query:

EXPLAIN CREATE MATERIALIZED VIEW IF NOT EXISTS new_mat_view AS
SELECT *
FROM my_view
INNER JOIN another_mat_view USING (oid)
ORDER BY oid, field_name;

Gather Merge  (cost=5696039356565.87..10040767101103.24 rows=37237923518438 width=31)
  Workers Planned: 2
  ->  Sort  (cost=5696039355565.85..5742586759963.90 rows=18618961759219 width=31)
        Sort Key: my_table.oid, my_table.field_name
        ->  Parallel Hash Join  (cost=11030236131.39..255829470118.27 rows=18618961759219 width=31)
              Hash Cond: (another_mat_view.oid = my_table.oid)
              ->  Parallel Seq Scan on another_mat_view  (cost=0.00..652514.56 rows=31645556 width=8)
              ->  Parallel Hash  (cost=636676233.38..636676233.38 rows=20353804801 width=31)
                    ->  Parallel Seq Scan on my_table  (cost=0.00..636676233.38 rows=20353804801 width=31)
                          Filter: (flag = '0000000000000000'::bit(16))

Konstantin
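
P.S. In case it helps, this is roughly what I plan to try next. It is only a
sketch: it assumes PostgreSQL 13 or later (hash_mem_multiplier does not exist
in older releases) and, as I understand it, dynamic_shared_memory_type = posix,
which is what makes the Parallel Hash allocations end up in /dev/shm.

-- Cap the per-node memory budget and disable parallelism for this session,
-- so the hash table is built in private backend memory instead of /dev/shm.
SET work_mem = '64MB';
SET hash_mem_multiplier = 1.0;            -- newer releases default to 2.0, so hash
                                          -- tables may otherwise exceed work_mem
SET max_parallel_workers_per_gather = 0;  -- force a non-parallel hash join

CREATE MATERIALIZED VIEW IF NOT EXISTS new_mat_view AS
SELECT *
FROM my_view
INNER JOIN another_mat_view USING (oid)
ORDER BY oid, field_name;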