Re: OOM Killing on Docker while ANALYZE running

Hi Alvaro,


These are our values:


default_statistics_target
---------------------------
 10000

 
 relpages | reltuples
----------+-----------
  1639345 |     1e+08


                      Table "public.pgbench_accounts"
  Column  |     Type      | Modifiers | Storage  | Stats target | Description
----------+---------------+-----------+----------+--------------+-------------
 aid      | integer       | not null  | plain    |              |
 bid      | integer       |           | plain    |              |
 abalance | integer       |           | plain    |              |
 filler   | character(84) |           | extended |              |
Indexes:
    "pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Options: fillfactor=100
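

If I read the ANALYZE code right, the sample it takes is about
300 * statistics_target rows, so our target of 10000 asks for a
3,000,000-row sample (which matches the "3000000 rows in sample" in the
output below). A rough size check; the ~120 bytes per row is just my
estimate for this table, not a measured value:

    -- back-of-the-envelope: rows in the sample and their approximate footprint
    SELECT 300 * 10000                                 AS sample_rows,
           pg_size_pretty((300 * 10000)::bigint * 120) AS approx_sample_size;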



I guess you hit the nail on the head:


pgbench=# set default_statistics_target =10;
SET
pgbench=# analyze verbose pgbench_accounts ;
INFO:  analyzing "public.pgbench_accounts"
INFO:  "pgbench_accounts": scanned 3000 of 1639345 pages, containing 183000 live rows and 0 dead rows; 3000 rows in sample, 100000000 estimated total rows
ANALYZE
pgbench=# set default_statistics_target =100;
SET
pgbench=# analyze verbose pgbench_accounts ;
INFO:  analyzing "public.pgbench_accounts"
INFO:  "pgbench_accounts": scanned 30000 of 1639345 pages, containing 1830000 live rows and 0 dead rows; 30000 rows in sample, 100000001 estimated total rows
ANALYZE
pgbench=# set default_statistics_target =1000;
SET
pgbench=# analyze verbose pgbench_accounts ;
INFO:  analyzing "public.pgbench_accounts"
INFO:  "pgbench_accounts": scanned 300000 of 1639345 pages, containing 18300000 live rows and 0 dead rows; 300000 rows in sample, 100000008 estimated total rows
ANALYZE
pgbench=# set default_statistics_target =10000;
SET
pgbench=# analyze verbose pgbench_accounts ;
INFO:  analyzing "public.pgbench_accounts"
INFO:  "pgbench_accounts": scanned 1639345 of 1639345 pages, containing 100000000 live rows and 0 dead rows; 3000000 rows in sample, 100000000 estimated total rows
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
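

In case it helps anyone else who hits this: instead of raising
default_statistics_target globally, the target can be raised only on the
columns that actually need finer statistics, which keeps the ANALYZE
sample (and its memory use) small. A sketch, with placeholder target
values rather than tuned recommendations:

    -- keep the global default modest (session-level here)
    SET default_statistics_target = 100;
    -- raise the target only where the planner needs better histograms
    ALTER TABLE pgbench_accounts ALTER COLUMN aid SET STATISTICS 1000;
    ANALYZE VERBOSE pgbench_accounts;

With the largest per-column target at 1000, the sample should be about
300 * 1000 = 300,000 rows instead of 3,000,000.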

Kind regards 


Jorge Daniel Fernandez 



From: Alvaro Herrera <alvherre@xxxxxxxxxxxxxx>
Sent: Thursday, January 25, 2018 4:47 PM
To: Jorge Daniel
Cc: pgsql-admin@xxxxxxxxxxxxxxxxxxxx; pgsql-admin@xxxxxxxxxxxxxx; dyuryeva@xxxxxxxxxxxx
Subject: Re: OOM Killing on Docker while ANALYZE running
 
Jorge Daniel wrote:
> Hi guys, I'm dealing with OOM killing on PostgreSQL 9.4.8 running on Docker,

What is your statistics target?  ANALYZE is supposed to acquire samples
of the data, not the whole table ...

> pgbench=# analyze verbose pgbench_accounts;
> INFO:  analyzing "public.pgbench_accounts"
> INFO:  "pgbench_accounts": scanned 1639345 of 1639345 pages, containing 100000000 live rows and 0 dead rows; 3000000 rows in sample, 100000000 estimated total rows

Here it seems to be saying that it reads all 1.6 million pages ...

--
Álvaro Herrera                https://www.2ndQuadrant.com/


PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
