
Re: constant crashing

Ignore my thread; I guess there might be a bug, given that it segfaulted.

On Mon, Apr 15, 2024, 12:48 AM Vijaykumar Jain <vijaykumarjain.github@xxxxxxxxx> wrote:


On Sun, 14 Apr 2024 at 21:50, jack <jack4pg@xxxxxxx> wrote:
The full error reads:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
error: connection to server was lost

PostgreSQL 16.2

I also believe it is a resource issue which can be rectified with a setting, but which setting?
If you were updating 100 million records what settings would you adjust?

Here are the updates I am performing on the 100 million records:
UPDATE table SET category_modified = UPPER(category);
UPDATE table SET category_modified = REGEXP_REPLACE(REPLACE(REPLACE(category_modified, '''','-'), '`', '-'), '\s{2,}', ' ', 'g') WHERE LENGTH(category_modified)>1 AND POSITION('--' IN category_modified)>0;
UPDATE table SET category_modified = REPLACE(category_modified,' ','-');
UPDATE table SET category_modified = CASE WHEN category_modified IS NOT NULL THEN regexp_replace(category_modified, '[^a-zA-Z]$', '') ELSE NULL END;
UPDATE table SET category_modified = regexp_replace(category_modified, '-{2,}', '-', 'g');
UPDATE table SET category_modified = SUBSTRING(category_modified FROM 1 FOR LENGTH(category_modified) - 1) WHERE LENGTH(category_modified)>1 AND category_modified LIKE '%-';
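For reference, each of those statements rewrites every row, so a 100-million-row table is rewritten up to six times, each pass producing its own dead tuples and WAL. A hedged sketch of the same transformations folded into a single pass (the placeholder table name from above is kept, quoted because TABLE is a reserved word; the intermediate WHERE filters are dropped here, so edge-case behaviour is not identical and this is a pattern rather than a drop-in replacement):

UPDATE "table"
SET category_modified =
    regexp_replace(                               -- drop a trailing '-'
      regexp_replace(                             -- collapse runs of '-'
        regexp_replace(                           -- drop a trailing non-letter
          replace(                                -- spaces -> '-'
            regexp_replace(                       -- collapse runs of whitespace
              replace(replace(upper(category), '''', '-'), '`', '-'),
              '\s{2,}', ' ', 'g'),
            ' ', '-'),
          '[^a-zA-Z]$', ''),
        '-{2,}', '-', 'g'),
      '-$', '');

One pass means one new row version per row instead of up to six, which roughly cuts the WAL, dead-tuple, and vacuum load the batch generates.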


Independent of best practices, I just want to check whether there is a leak.
I created a sample table with text data and ran updates like yours, and I could not see memory growth, but I have a small VM, and of course your category_modified field might be more complex than simple text fields of 30-40 characters.
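If you want to repeat that check yourself, a rough sketch of the kind of throwaway table I mean (the name, row count, and sample text are arbitrary):

-- Disposable test table: a few million short text rows to run the UPDATEs against.
CREATE TABLE category_test AS
SELECT 'Some  Category''s--Name '::text AS category,
       NULL::text AS category_modified
FROM generate_series(1, 5000000);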

Can you grab the PID of your psql backend and (if you have pidstat installed) monitor resource usage for that PID?

postgres@pg:~/udemy/16$ psql
psql (16.2 (Ubuntu 16.2-1.pgdg22.04+1))
Type "help" for help.

postgres=# select pg_backend_pid();
 pg_backend_pid
----------------
           1214
(1 row)

# pidstat 2 100 -rud -h -p 1214 
(get all stats for that PID). That might help to figure out whether there is a leak, or whether the server has other things competing for memory and your updates were picked off by the OOM killer.

Linux 5.15.0-101-generic (pg)   04/15/24        _x86_64_        (1 CPU)

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
00:40:25      113      1214    0.00    0.00    0.00    0.00    0.00     0      0.00      0.00  354112  220940  24.18      0.00      0.00      0.00       0  postgres

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
00:40:27      113      1214    0.00    0.00    0.00    0.00    0.00     0      0.00      0.00  354112  220940  24.18      0.00      0.00      0.00       0  postgres
....

Of course, if there is a genuine leak, then more digging might be needed: Finding memory leaks in Postgres C code (enterprisedb.com).
Otherwise the kernel's OOM killer will just kill the process requesting more memory than is available: Memory context: how PostgreSQL allocates memory - CYBERTEC (cybertec-postgresql.com)
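Since you are on 16.2, you can also inspect a backend's allocations from SQL. A hedged example (the view and function exist as of PostgreSQL 14; the column choice and LIMIT are just for illustration): run the query in the session doing the updates, or call pg_log_backend_memory_contexts(<pid>) from another session to dump the same information for that backend into the server log.

-- Ten largest memory contexts of the current backend (PostgreSQL 14+).
SELECT name, parent, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;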

