Hi,

I am working on a project that uses PostgreSQL, and our application crashes from time to time due to a lack of disk space: pg_xlog consumes all the available space. I have ruled out a problem with the archiving process; the main issue appears to be the number of WAL files created per minute. We are creating more than two WAL files per minute, as this listing shows:

[DEV] [root@ip-10-175-17-84 /data/node1/pg_xlog]# ls -lrt
total 82208
-rw-------. 1 postgres postgres 16777216 mar 10 14:15 000000010000001C000000D7
-rw-------. 1 postgres postgres 16777216 mar 10 14:15 000000010000001C000000D8
-rw-------. 1 postgres postgres 16777216 mar 10 14:15 000000010000001C000000D9
-rw-------. 1 postgres postgres 16777216 mar 10 14:16 000000010000001C000000DA
drwx------. 2 postgres postgres   286720 mar 10 14:16 archive_status
-rw-------. 1 postgres postgres 16777216 mar 10 14:17 000000010000001C000000DB
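For scale, two 16 MB segments per minute is roughly 32 MB of WAL per minute. The rate can also be confirmed from inside the server with something like the following (a sketch for 9.x, where the functions are named pg_current_xlog_location() and pg_xlog_location_diff(); the LSN literal below is just a hypothetical value returned by the first call):

-- note the current WAL write position (hypothetical return value shown)
SELECT pg_current_xlog_location();                          -- e.g. 1C/D7000028
-- wait 60 seconds, then measure how many bytes of WAL were written since then
SELECT pg_xlog_location_diff(pg_current_xlog_location(), '1C/D7000028')
       AS wal_bytes_per_minute;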
But I cannot identify the transaction responsible for this growth.

The WAL-related parameters with non-default values in the database are:

wal_level = 'archive'
archive_mode = on
archive_command = '/usr/bin/true'

To identify the transaction responsible for this behaviour I have tried two approaches.

1) Increase statement logging to 'all'. In two minutes I collected 97,906 SELECTs but no INSERT/UPDATE/DELETE. The logging parameters in use were:

log_destination = 'syslog,csvlog'
logging_collector = on              # Enable capturing of stderr and csvlog
log_directory = 'pg_log'            # directory where log files are written
log_filename = 'postgresql-%Y-%m-%d.log'
log_line_prefix = '< %m >'
log_timezone = 'Europe/Madrid'
log_statement = 'ddl'

An example of the captured SELECTs:

2017-03-10 13:59:57.319 CET,"app","app",11919,"10.175.17.84:33895",58c2a02f.2e8f,1,"SELECT",2017-03-10 13:46:39 CET,4/4352895,0,LOG,00000,"execute S_1: SELECT id, locked_ts FROM table WHERE id = $1","parameters: $1 = 'TOPIC_CONSUMER'",,,,,,,,""
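For reference, this kind of temporary increase in statement logging can be done along these lines (a sketch; ALTER SYSTEM requires 9.4 or later, on older versions edit postgresql.conf and reload instead):

-- raise statement logging cluster-wide without a restart
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- ...capture logs for a couple of minutes, then revert...
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();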
2) Use pg_xlogdump. I gathered output like this:

rmgr: Heap        len (rec/tot): 14/ 240, tx: 54704944, lsn: 1C/B9000038, prev 1C/B8FFFFD8, desc: UPDATE off 1 xmax 54704944 ; new off 13 xmax 0, blkref #0: rel 16431/598016/598033 blk 2709
rmgr: Btree       len (rec/tot): 2/ 72, tx: 54704944, lsn: 1C/B9000128, prev 1C/B9000038, desc: INSERT_LEAF off 26, blkref #0: rel 16431/598016/598040 blk 3560
rmgr: Btree       len (rec/tot): 2/ 72, tx: 54704944, lsn: 1C/B9000170, prev 1C/B9000128, desc: INSERT_LEAF off 103, blkref #0: rel 16431/598016/598042 blk 727
rmgr: Btree       len (rec/tot): 2/ 64, tx: 54704944, lsn: 1C/B90001B8, prev 1C/B9000170, desc: INSERT_LEAF off 268, blkref #0: rel 16431/598016/598043 blk 1441
rmgr: Transaction len (rec/tot): 8/ 34, tx: 54704944, lsn: 1C/B90001F8, prev 1C/B90001B8, desc: COMMIT 2017-03-10 13:58:57.494289 CET
rmgr: Heap2       len (rec/tot): 8/ 56, tx: 0, lsn: 1C/B9000220, prev 1C/B90001F8, desc: CLEAN remxid 54702701, blkref #0: rel 16431/598016/598033 blk 15966
rmgr: Heap        len (rec/tot): 14/ 241, tx: 54704945, lsn: 1C/B9000258, prev 1C/B9000220, desc: UPDATE off 33 xmax 54704945 ; new off 8 xmax 0, blkref #0: rel 16431/598016/598033 blk 15966
rmgr: Btree       len (rec/tot): 2/ 3589, tx: 54704945, lsn: 1C/B9000350, prev 1C/B9000258, desc: INSERT_LEAF off 3, blkref #0: rel 16431/598016/598040 blk 1567 FPW
rmgr: Btree       len (rec/tot): 2/ 72, tx: 54704945, lsn: 1C/B9001158, prev 1C/B9000350, desc: INSERT_LEAF off 103, blkref #0: rel 16431/598016/598042 blk 727
rmgr: Btree       len (rec/tot): 2/ 64, tx: 54704945, lsn: 1C/B90011A0, prev 1C/B9001158, desc: INSERT_LEAF off 269, blkref #0: rel 16431/598016/598043 blk 1441
rmgr: Transaction len (rec/tot): 8/ 34, tx: 54704945, lsn: 1C/B90011E0, prev 1C/B90011A0, desc: COMMIT 2017-03-10 13:58:57.546523 CET
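If it helps, the dump shows a stream of small UPDATE transactions committing about 50 ms apart, with heap updates on rel 16431/598016/598033 (tablespace/database/relfilenode) and index inserts on 598040-598043. That relfilenode can be mapped back to a table with something like the following (a sketch; the second and third queries must be run while connected to the database whose OID is 598016):

-- which database has OID 598016?
SELECT datname FROM pg_database WHERE oid = 598016;
-- connected to that database: which relation has relfilenode 598033?
SELECT relname FROM pg_class WHERE relfilenode = 598033;
-- on 9.4+ the same lookup can be done with the built-in helper
-- (16431 is the tablespace OID reported by pg_xlogdump)
SELECT pg_filenode_relation(16431, 598033);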
Any suggestion to address this problem is welcome.

Regards,
Gabriel