On 1/27/20 10:05 AM, Willy-Bas Loos wrote:
Hi,
We have a server running PostgreSQL 9.4.12 on Ubuntu.
There has been a sudden rise in the amount of disk space used by
PostgreSQL, causing a disk-space error:
2020-01-22 17:24:37 CET db: ip: us: PANIC: could not write to file
"pg_xlog/xlogtemp.23346": No space left on device
2020-01-22 17:24:37 CET db: ip: us: LOG: WAL writer process (PID 23346)
was terminated by signal 6: Aborted
2020-01-22 17:24:37 CET db: ip: us: LOG: terminating any other active
server processes
The disk was at roughly 75% before, and something or someone added 150 GB
to the database, bringing disk usage to 100%.
The query that hit the initial error was creating a rather large table,
but it is not confirmed that this is the only source of the extra data.
It is possible, though.
Now I can see in pg_stat_database and in postgresql/9.4/main/base/pgsql_tmp
that there are 90 GB of temporary files in the database.
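For reference, a sketch of how that can be checked (the pg_stat_database
counters are cumulative since the last stats reset, so they count files
already removed as well; the pgsql_tmp path below assumes the stock Ubuntu
layout for 9.4):

```shell
# Cumulative temp-file activity per database since the last stats reset
sudo -u postgres psql -c "SELECT datname, temp_files,
       pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database ORDER BY temp_bytes DESC;"

# Temp files currently sitting on disk (Ubuntu default path for 9.4)
sudo du -sh /var/lib/postgresql/9.4/main/base/pgsql_tmp
```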
Could that amount of temp files have been left behind by the unfinished
query? I'm not sure exactly how hard a stop signal 6 is.
Also: how can I make Postgres clean up the files?
Can it be done without restarting the cluster?
Will restarting it help?
Restarting postgresql clears out pgsql_tmp. "pg_ctl restart -D ..." should
be all you need.
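On Debian/Ubuntu, a sketch of that restart (pg_ctlcluster is the
distribution's wrapper around pg_ctl; cluster name "main" and the path
below are the stock 9.4 layout, adjust if yours differs):

```shell
# Restart the 9.4 "main" cluster via the Debian/Ubuntu wrapper
sudo pg_ctlcluster 9.4 main restart

# pgsql_tmp should be empty afterwards
sudo du -sh /var/lib/postgresql/9.4/main/base/pgsql_tmp
```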
--
Angular momentum makes the world go 'round.