
Re: PGSQL 11.4: shared_buffers and /dev/shm size


 



Hello Thomas,

Thank you for the explanation. work_mem = 512MB and max_parallel_workers_per_gather = 2, I run only one Postgres instance and only one query, and EXPLAIN shows "Workers Planned: 2" for this query. Why can it use more than 1 GB of /dev/shm?


Konstantin

> On 9 Jul 2019, at 13:51, Thomas Munro <thomas.munro@xxxxxxxxx> wrote:
> 
> On Tue, Jul 9, 2019 at 10:15 PM Jean Louis <bugs@gnu.support> wrote:
>> * Konstantin Malanchev <hombit@xxxxxxxxx> [2019-07-09 12:10]:
>>> I have 8 GB of RAM and the /dev/shm size is 4 GB, and there is no significant memory usage by other system processes. I am surprised that Postgres uses more space in /dev/shm than the shared_buffers parameter allows; probably I don't understand what this parameter means.
>>> 
>>> I have no opportunity to enlarge the total RAM, and probably this query requires too much RAM to execute. Should Postgres just use the HDD as temporary storage in this case?
>> 
>> That I cannot know. I do know that /dev/shm can
>> grow to as much as the available free RAM.
> 
> Hi,
> 
> PostgreSQL creates segments in /dev/shm for parallel queries (via
> shm_open()), not for shared buffers.  The amount used is controlled by
> work_mem.  Queries can use up to work_mem for each node you see in the
> EXPLAIN plan, and for each process, so it can be quite a lot if you
> have lots of parallel worker processes and/or lots of
> tables/partitions being sorted or hashed in your query.
> 
> -- 
> Thomas Munro
> https://enterprisedb.com
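
The per-node, per-process accounting Thomas describes can be sketched as a quick back-of-envelope estimate with the numbers from this thread. This is a rough upper bound, not PostgreSQL's exact accounting, and the node count is a hypothetical assumption (the actual count depends on the EXPLAIN plan):

```python
# Rough upper bound on parallel-query memory, assuming each process may
# use up to work_mem for each memory-hungry (sort/hash) node in the plan.
work_mem_mb = 512        # work_mem setting from this thread
workers_planned = 2      # "Workers Planned: 2" from EXPLAIN
processes = workers_planned + 1  # the leader process participates too

# Hypothetical assumption: two sort/hash nodes in the plan.
memory_hungry_nodes = 2

upper_bound_mb = work_mem_mb * processes * memory_hungry_nodes
print(upper_bound_mb)  # 3072 MB -- well over 1 GB even with these modest settings
```

So even with work_mem = 512MB and only two parallel workers, a plan with a couple of hash or sort nodes can legitimately exceed 1 GB of shared memory.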

