Everything was working as expected for the first 2 days (with a 5-minute auto checkpoint interval, the log contains the expected 288 restartpoints per day). And if I manually requested a checkpoint, the expected log entry would appear:
LOG: restartpoint starting: immediate force wait
I noticed on our monitoring system that the xlog storage was beginning to grow on the slave. When I went to check the log file, the last checkpoint that happened was started at 8:25 and finished at 8:29 (matching our 0.8 checkpoint_completion_target):
2012-08-16 08:25:20.004 MDT [24039]: [2565-1] LOG: restartpoint starting: time
2012-08-16 08:29:20.240 MDT [24039]: [2566-1] LOG: restartpoint complete: wrote 6943 buffers (0.7%); 0 transaction log file(s) added, 0 removed, 7 recycled; write=240.135 s, sync=0.098 s, total=240.236 s; sync files=1488, longest=0.025 s, average=0.000 s
2012-08-16 08:29:20.240 MDT [24039]: [2567-1] LOG: recovery restart point at 1A0/8B8DF820
2012-08-16 08:29:20.240 MDT [24039]: [2568-1] DETAIL: last completed transaction was at log time 2012-08-16 08:29:19.94531-06
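As a sanity check on those numbers (a sketch; the 300 s comes from our checkpoint_timeout of 5min):

```shell
# A timed restartpoint's write phase is spread over roughly
# checkpoint_timeout * checkpoint_completion_target. With our settings
# (5min, 0.8) that predicts ~240 s, matching the write=240.135 s above.
awk 'BEGIN { print 300 * 0.8 }'   # -> 240
```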
Now if I manually try to trigger a checkpoint, no log message shows up.
So it seems like whatever runs the checkpoints has frozen up and is no longer responding.
The only information in the log on the slave is the checkpoints from the midnight log rotation up to the last record at 8:29, and a couple of slow-query log entries from a nightly data load to our data warehouse that runs overnight. There is no regular querying happening against the slave other than the DW load, which finishes around 4 AM.
The logging itself does appear to still be working: if I send an invalid query against the system, it does show up in the log.
I checked dmesg and /var/log/messages and there are no IO errors or any other messages.
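The next thing I'm thinking of is inspecting the process directly. A sketch of picking it out (the listing below is a made-up sample standing in for `ps -eo pid,cmd`, and the PIDs are hypothetical; on 9.1 the background writer is what performs restartpoints on a standby):

```shell
# Sketch: pick the background writer (which performs restartpoints on a
# 9.1 standby) out of a process listing so it can be inspected with
# strace/gdb. The listing is a hypothetical sample of `ps -eo pid,cmd`.
sample='24039 postgres: startup process
24040 postgres: writer process
24041 postgres: wal receiver process'
printf '%s\n' "$sample" | awk '/postgres: writer process/ { print $1 }'
# With the real PID in hand (as root):
#   strace -p 24040             # is it stuck in a system call?
#   gdb -p 24040 -batch -ex bt  # stack trace (needs debug symbols)
```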
Anyone have an idea of where I should go from here?
Thanks
On Mon, Aug 13, 2012 at 12:14 PM, Mike Roest <mike.roest@xxxxxxxxxxxx> wrote:
Hey Everyone,

With my investigations into our streaming slave having a continuously growing pg_xlog directory (http://archives.postgresql.org/pgsql-general/2012-08/msg00124.php) I think I've hit upon what's going on. Using pg_controldata I've found that our streaming slave hasn't run a checkpoint for over 2 weeks, or more specifically hasn't completed a checkpoint in 2 weeks. The server was synced and started up on the 27th, so there were 4 days where it looks like the checkpoints were happening correctly. I have attempted a manual checkpoint on the slave with no effect (it returns instantly in the client, but pg_controldata never shows any change). On my testing system the automatic checkpoint runs successfully every 5 minutes and removes the WAL logs it no longer needs to get to that restartpoint.

pg_control version number:            903
Catalog version number:              201105231
Database system identifier:          5680552055141687295
Database cluster state:              in archive recovery
pg_control last modified:            Tue 31 Jul 2012 12:43:59 PM MDT
Latest checkpoint location:          178/5344C730
Prior checkpoint location:           178/4E8C1938
Latest checkpoint's REDO location:   178/4FC5E8E0
Latest checkpoint's TimeLineID:      10
Latest checkpoint's NextXID:         0/291590955
Latest checkpoint's NextOID:         67986575
Latest checkpoint's NextMultiXactId: 122827
Latest checkpoint's NextMultiOffset: 296304
Latest checkpoint's oldestXID:       64713442
Latest checkpoint's oldestXID's DB:  16408
Latest checkpoint's oldestActiveXID: 260568245
Time of latest checkpoint:           Tue 31 Jul 2012 12:35:53 PM MDT
Minimum recovery ending location:    199/EC6C0BA0
Backup start location:               0/0
Current wal_level setting:           hot_standby
Current max_connections setting:     600
Current max_prepared_xacts setting:  0
Current max_locks_per_xact setting:  256
Maximum data alignment:              8
Database block size:                 8192
Blocks per segment of large relation: 131072
WAL block size:                      8192
Bytes per WAL segment:               16777216
Maximum length of identifiers:       64
Maximum columns in an index:         32
Maximum size of a TOAST chunk:       1996
Date/time type storage:              64-bit integers
Float4 argument passing:             by value
Float8 argument passing:             by value

Compared to our master:

pg_control version number:            903
Catalog version number:              201105231
Database system identifier:          5680552055141687295
Database cluster state:              in production
pg_control last modified:            Mon 13 Aug 2012 11:35:35 AM MDT
Latest checkpoint location:          199/EC5E4218
Prior checkpoint location:           199/EBEFB120
Latest checkpoint's REDO location:   199/EC033110
Latest checkpoint's TimeLineID:      10
Latest checkpoint's NextXID:         0/291590955
Latest checkpoint's NextOID:         75744399
Latest checkpoint's NextMultiXactId: 122877
Latest checkpoint's NextMultiOffset: 296404
Latest checkpoint's oldestXID:       91782681
Latest checkpoint's oldestXID's DB:  16408
Latest checkpoint's oldestActiveXID: 291583665
Time of latest checkpoint:           Mon 13 Aug 2012 11:31:35 AM MDT
Minimum recovery ending location:    0/0
Backup start location:               0/0
Current wal_level setting:           hot_standby
Current max_connections setting:     600
Current max_prepared_xacts setting:  0
Current max_locks_per_xact setting:  256
Maximum data alignment:              8
Database block size:                 8192
Blocks per segment of large relation: 131072
WAL block size:                      8192
Bytes per WAL segment:               16777216
Maximum length of identifiers:       64
Maximum columns in an index:         32
Maximum size of a TOAST chunk:       1996
Date/time type storage:              64-bit integers
Float4 argument passing:             by value
Float8 argument passing:             by value

The master appears to be correctly running its automated checkpoints every 5 minutes.
Our checkpoint configuration is currently:

checkpoint_completion_target | 0.8  | Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval.
checkpoint_segments          | 128  | Sets the maximum distance in log segments between automatic WAL checkpoints.
checkpoint_timeout           | 5min | Sets the maximum time between automatic WAL checkpoints.
checkpoint_warning           | 30s  | Enables warnings if checkpoint segments are filled more frequently than this.

The settings are the same on both the slave and the master. We don't currently have log_checkpoints enabled on our slave (I'm going to make the change and schedule a resync for this evening to see if the log reveals anything). I've done some googling and can't really find anything that would let me see the status of checkpoints currently in progress.

For more info, we're running PostgreSQL 9.1.1 built from source on two x64 CentOS 5.8 machines. The total size of our /data directory is currently 94.5 GB, all in one DB with 2193 schemas (each of which has around 1300 objects). From what I'm able to pull off the slave, there isn't constant IO happening as there would be if the checkpoints were running and just thrashing the disks. If there's any other information that may be helpful I can pull it.

Anyone have any suggestions as to next roads to take (besides turning on the checkpoint logging and trying to see what's going on)?

Thanks
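For what it's worth, a watchdog along these lines could catch this situation earlier (a sketch; the sample lines stand in for real `pg_controldata "$PGDATA"` output, which is what you'd actually pipe in):

```shell
# Sketch of a monitoring check: extract the latest-checkpoint time from
# pg_controldata output so a cron job can alert when a standby stops
# completing restartpoints. The printf feeds sample lines standing in
# for `pg_controldata "$PGDATA"`.
printf '%s\n' \
  'Database cluster state:               in archive recovery' \
  'Time of latest checkpoint:            Tue 31 Jul 2012 12:35:53 PM MDT' |
  awk -F': +' '/^Time of latest checkpoint/ { print $2 }'
# A cron job could compare that timestamp against `date` and alert when
# it is older than a few checkpoint_timeout intervals.
```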