> On Mar 30, 2019, at 10:54 AM, Gmail <robjsargent@xxxxxxxxx> wrote:
>
>
>>>> On Mar 29, 2019, at 6:58 AM, Michael Paquier <michael@xxxxxxxxxxx> wrote:
>>>
>>> On Thu, Mar 28, 2019 at 09:53:16AM -0600, Rob Sargent wrote:
>>> This is pg10 so it's pg_wal. ls -ltr
>>>
>>>
>>> -rw-------. 1 postgres postgres 16777216 Mar 16 16:33
>>> 0000000100000CEA000000B1
>>> -rw-------. 1 postgres postgres 16777216 Mar 16 16:33
>>> 0000000100000CEA000000B2
>>>
>>> ... 217 more on through to ...
>>>
>>> -rw-------. 1 postgres postgres 16777216 Mar 16 17:01
>>> 0000000100000CEA000000E8
>>> -rw-------. 1 postgres postgres 16777216 Mar 16 17:01
>>> 0000000100000CEA000000E9
>>> -rw-------. 1 postgres postgres 16777216 Mar 28 09:46
>>> 0000000100000CEA0000000E
> I’m now down to 208 Mar 16 WAL files, so they are being processed (or at least deleted). I’ve taken a snapshot of the pg_wal dir so that I can see which files get processed. None of them are files I’ve listed previously.
Two more have been cleaned up: 001C and 001D, generated at 16:38 Mar 16.
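For what it’s worth, a quick way to see which segment the server is currently writing is something like the following (a sketch, assuming PostgreSQL 10 function names):

    SELECT pg_walfile_name(pg_current_wal_lsn());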
Please share your complete postgresql.conf file and the results from this query: SELECT * FROM pg_settings;

Has someone configured WAL archiving in the past? You've run out of disk space, as the log message you shared states: "No space left on device". What's the output of df -h?
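A narrower query along these lines would show just the archiving- and WAL-retention-related settings (a sketch; these are standard parameter names in pg_settings, adjust the list as needed):

    SELECT name, setting, source
    FROM pg_settings
    WHERE name IN ('archive_mode', 'archive_command', 'wal_keep_segments',
                   'max_wal_size', 'min_wal_size', 'checkpoint_timeout');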
BTW, how spread apart are the checkpoints? Do you have stats on that? Maybe they're too far apart, and that's why WAL files cannot be recycled quickly enough.
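If it helps, a rough picture of checkpoint frequency can be had from the bgwriter statistics view; the stats_reset column tells you the window the counters cover (a sketch):

    SELECT checkpoints_timed, checkpoints_req, stats_reset
    FROM pg_stat_bgwriter;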
Two attempts (one in-line, one with an attachment) at sending the postgresql.conf and pg_settings report have been sent to a moderator.
As per your configuration: max_wal_size = 50GB
This seems to be the cause of the WAL files piling up.
It has been declared twice; the last declaration is the one taking effect.
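To confirm which of the two declarations is actually in effect, and from which line of the file it came, something like this should work (a sketch; sourcefile and sourceline are regular pg_settings columns, though they may show as NULL without superuser rights):

    SELECT name, setting, unit, sourcefile, sourceline
    FROM pg_settings
    WHERE name = 'max_wal_size';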
That’s an interesting catch. Thank you. I’ll have that reverted to the default. Note, however, that the WAL files themselves are all the default 16MB. We’re currently down to 88 Mar 16 WAL files. My inclination is to wait this out, see whether all of the Mar 16 files go away quietly, and then reset our backups.
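For the revert itself, since max_wal_size only needs a reload rather than a restart, removing the extra line from postgresql.conf and then running something like the following should be enough (a sketch; assumes superuser access):

    SELECT pg_reload_conf();
    SHOW max_wal_size;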