Updating a large table


 



Hello! We have a large table, 11 GB (about 37 million records), and we need to alter it: add a new column whose default value is FALSE. The column must also be NOT NULL.
So, first I've done:

ALTER TABLE clusters ADD COLUMN "is_paid" BOOLEAN;
ALTER TABLE clusters ALTER COLUMN "is_paid" SET DEFAULT FALSE;

after that:

UPDATE clusters SET is_paid = DEFAULT WHERE ctime <= now() - interval '720h' AND is_paid IS DISTINCT FROM FALSE;

Everything went OK. Then I tried to run it again for an interval of 1 year, and I got something nobody wants to see: there was no space left on disk. The WAL files had eaten everything; the master couldn't ship some of the WAL to its replicas because the bandwidth wasn't enough.

Well, I'm searching for a better way to update the table.
Solutions I have found:
1. Split my UPDATE into chunks.
2. Rebuild the table through a new temporary table, but that's not convenient for me because there are a lot of foreign keys and indexes.
3. HOT updates. This is the most interesting case for me.
Speaking of HOT updates: https://www.dbrnd.com/2016/03/postgresql-the-awesome-table-fillfactor-to-speedup-update-and-select-statement/
The article says it might be useful for tables that change often, and moreover that it would be the best way to speed up UPDATEs.
So, my first question is: will it work for all tuples? This article says no: https://www.dbrnd.com/2016/03/postgresql-alter-table-to-change-fillfactor-value/, but I could not find confirmation in the official PostgreSQL documentation.
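For reference, as far as I understand, the fillfactor change from the second article boils down to something like this (70 here is just an example value):

ALTER TABLE clusters SET (fillfactor = 70);
-- fillfactor only affects pages written from now on, so the table has to be
-- rewritten (VACUUM FULL, or pg_repack to avoid the long exclusive lock)
-- before existing pages get the free space that HOT updates need.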
Why do I need to run VACUUM after updating?
How should I choose a better (lower) fillfactor?
What will happen to the WAL files in this case?
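To make the chunked idea (solution 1) concrete, what I have in mind is one batch per transaction, driven from a client script: a single DO block wouldn't help on 9.6, since it all runs in one transaction and would produce one giant WAL burst anyway. Assuming an integer primary key "id" (my assumption) and psql variables :from/:to advanced by the script:

UPDATE clusters
   SET is_paid = DEFAULT
 WHERE id > :from AND id <= :to
   AND ctime <= now() - interval '1 year'
   AND is_paid IS DISTINCT FROM FALSE;

Each batch commits on its own, so WAL is produced and shipped to the replicas gradually instead of in one burst, and autovacuum can reclaim dead tuples between batches.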
Thank you!



PostgreSQL 9.6

-- 
Timokhin 'maf' Maxim




