I've seen performance degradation on some other RDBMS systems while a "drop database" was in progress. We need to drop a 16 TB database with minimal impact on our end users. There are 32 other databases with hundreds of connections on the same cluster,
and I just want to release the space with minimal disruption. I'm trying to find the best solution. I could even script 'TRUNCATE TABLE' or 'DROP TABLE' in a loop if that helps. I don't have the luxury of testing such a large database drop in action.
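Something like the following is what I had in mind -- just a rough sketch, not a tested script, assuming a placeholder database name 'bigdb' and passwordless local psql access:

    #!/bin/sh
    # Rough sketch: drop the big database's tables one at a time, with a pause
    # between drops, so the filesystem delete I/O is spread out over time.
    # 'bigdb' is a placeholder for the 16 TB database's name.
    DB=bigdb

    psql -At -d "$DB" -c "
      SELECT format('DROP TABLE IF EXISTS %I.%I CASCADE;', schemaname, tablename)
      FROM pg_tables
      WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
    " | while read -r stmt; do
        psql -d "$DB" -c "$stmt"
        sleep 5    # breathing room between drops; IF EXISTS covers tables
                   # already removed by an earlier CASCADE
    done

    # Once the tables are gone, the final DROP DATABASE has little left to unlink:
    # psql -d postgres -c "DROP DATABASE bigdb;"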
Thanks!
From: Ron <ronljohnsonjr@xxxxxxxxx>
Sent: Thursday, October 17, 2019 1:59 PM
To: pgsql-general@xxxxxxxxxxxxxxxxxxxx <pgsql-general@xxxxxxxxxxxxxxxxxxxx>
Subject: Re: drop database

On 10/17/19 3:44 PM, Julie Nishimura wrote:
A lot has to do with how quickly the underlying file system can delete files. To be honest, though... does it really matter how long it takes? (If I were worried about it -- which I might be -- then I'd put a DROP DATABASE script in crontab and run it from there.)
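Something like this crontab entry, for example (a sketch only, assuming the hypothetical database name 'bigdb' and that local psql access needs no password):

    # Sketch of a crontab entry: run the drop at 2 AM, outside peak hours.
    # 'bigdb' and the log path are placeholders; adjust names and times as needed.
    0 2 * * * psql -d postgres -c "DROP DATABASE bigdb;" >> /tmp/drop_bigdb.log 2>&1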
--
Angular momentum makes the world go 'round.