Re: Enhancement Request

On Fri, Feb 2, 2024 at 2:54 PM Ron Johnson <ronljohnsonjr@xxxxxxxxx> wrote:
On Fri, Feb 2, 2024 at 3:50 AM Olivier Gautherot <ogautherot@xxxxxxxxxxxxx> wrote:


On Thu, Feb 1, 2024 at 2:35 AM Ron Johnson <ronljohnsonjr@xxxxxxxxx> wrote:

...

Deleting large numbers of rows is a complex task with a lot of hidden issues (index management, among other things). Adding a LIMIT clause to DELETE will not simplify it in any way.

Smaller "bites" are easier to manage than one giant bite.
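
For illustration, a minimal sketch of that batching approach: since DELETE has no LIMIT clause in PostgreSQL, the usual workaround is a keyed subquery. The events table and created_at column below are hypothetical:

    -- Delete at most 10000 old rows per pass; rerun until 0 rows are affected.
    DELETE FROM events
    WHERE id IN (
        SELECT id
        FROM events
        WHERE created_at < now() - interval '1 month'
        ORDER BY id
        LIMIT 10000
    );

Each pass is a short transaction, which keeps lock times down and gives autovacuum a chance to keep up between rounds.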

To some extent, yes. But when it comes to large quantities overall, you also have to consider vacuum, and for that it's best to take the DB offline. It depends on your use case.
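
To make the vacuum point concrete, plain VACUUM and VACUUM FULL behave very differently here (same hypothetical events table as above):

    -- Plain VACUUM marks dead tuples reusable and does not block writers:
    VACUUM (VERBOSE, ANALYZE) events;

    -- VACUUM FULL rewrites the table and returns disk space to the OS,
    -- but takes an ACCESS EXCLUSIVE lock, effectively an outage for that table:
    VACUUM FULL events;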

 
I remember doing this on tables with over 50 million rows, and I had my share of disaster recoveries. Partitions saved my life.
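
For anyone following along, a rough sketch of how partitioning sidesteps the mass DELETE entirely (declarative range partitioning, available since PostgreSQL 10; table and partition names are illustrative):

    CREATE TABLE events (
        id         bigint      NOT NULL,
        created_at timestamptz NOT NULL
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE events_2024_01 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

    -- Retiring a month of data becomes a metadata operation, not a mass DELETE:
    DROP TABLE events_2024_01;
    -- or, to keep the detached data around for archiving:
    -- ALTER TABLE events DETACH PARTITION events_2024_01;

Dropping or detaching a partition is near-instant, regardless of how many rows it holds.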

You must have been doing something wrong.

The mistake was to hope for the best, and the best didn't happen: we didn't take the feeding process offline (over 1000 rows per minute), and after 24 hours the DB was still trying to recover. We finally took everything offline for 2 hours and it stabilized. The delete process involved chunks of 15 million rows at a time, each worth 1 month of data, so not a minor issue.
