No, there are no long text or bytea fields, just int, bigint, ctime and limited varchar types. However, one table has 2250 partitions, and I've seen several SELECTs against this table (based on the primary key of individual partitions) with a huge IN () list (about 500 keys). I don't expect these to be efficient, but I'm not sure these queries caused such memory consumption.

Best regards,
Alexander Pyhalov,
programmer, Telecommunications Infrastructure Department,
Office of Information and Communication Infrastructure, Southern Federal University

________________________________________
From: Laurenz Albe <laurenz.albe@xxxxxxxxxxx>
Sent: 17 October 2019, 14:09
To: Alexander Pyhalov; pgsql-general@xxxxxxxxxxxxxxxxxxxx
Subject: Re: PostgreSQL memory usage

Alexander Pyhalov wrote:
> After looking at my DBMS server for some time, I've realized that I don't understand what is going on...
>
> The server has 48 GB RAM. shared_buffers is set to 12GB, work_mem to 32MB, and pgbouncer
> in transaction mode is used to connect a pool (pool size 80) to a PostgreSQL 10.5 server.
>
> I see that at some point several PostgreSQL backends start consuming about 16 GB RAM.
> If we account for shared_buffers, it means 4 GB RAM of private backend memory.
> How can we reach such numbers? I don't see any long-running (or complex) queries
> (however, there could be long-running transactions and queries to large partitioned tables).
> But how could they consume 512 * work_mem memory?

A wild guess: you are using large data items, or you have PostGIS installed,
which is known to use lots of memory for certain operations; that memory is
not limited by "work_mem".

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
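As a footnote to the thread, the multiples quoted above can be reconciled with a quick back-of-the-envelope calculation (a minimal sketch using only the figures given in the messages; note that "512 * work_mem" matches the full 16 GB resident size, while the private portion after subtracting shared_buffers is 128 * work_mem):

```python
# All values in MB, taken from the figures quoted in the thread.
total_backend_mb = 16 * 1024    # observed resident size of a backend
shared_buffers_mb = 12 * 1024   # shared memory, counted in each backend's RSS
work_mem_mb = 32

# Private memory per backend = observed RSS minus shared_buffers.
private_mb = total_backend_mb - shared_buffers_mb

# Express both totals as multiples of work_mem.
units_total = total_backend_mb // work_mem_mb    # 512 -- the figure in the message
units_private = private_mb // work_mem_mb        # 128 -- multiples of private memory

print(private_mb)     # 4096, i.e. 4 GB of private backend memory
print(units_total)    # 512
print(units_private)  # 128
```

So the "512 * work_mem" in the question counts shared_buffers as well; the genuinely unexplained private memory is "only" about 128 * work_mem per backend.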