Re: Unable to Vacuum Large Defragmented Table

On Mon, Apr 8, 2019 at 5:22 PM Igal Sapir <igal@xxxxxxxxx> wrote:
Pavel,

On Sun, Apr 7, 2019 at 11:22 PM Pavel Stehule <pavel.stehule@xxxxxxxxx> wrote:

On Mon, Apr 8, 2019 at 7:57 AM Igal Sapir <igal@xxxxxxxxx> wrote:
David,

On Sun, Apr 7, 2019 at 8:11 PM David Rowley <david.rowley@xxxxxxxxxxxxxxx> wrote:
On Mon, 8 Apr 2019 at 14:57, Igal Sapir <igal@xxxxxxxxx> wrote:
> However, I have now deleted about 50,000 rows more and the table has only 119,688 rows.  The pg_relation_size() still shows 31MB and pg_total_relation_size() still shows 84GB.
>
> It doesn't make sense that after deleting about 30% of the rows the values here do not change.

Deleting rows does not make the table any smaller; it just creates
dead rows in the table.  VACUUM tries to release the space used by
those dead rows and turn it back into free space.  A normal VACUUM (not
FULL) can only shrink the table if completely empty pages are found at
the end of the table.
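
For illustration, a minimal sketch of that behavior (the table name and
WHERE clause are made up):

-- Deleting rows only marks them dead; the heap file does not shrink.
DELETE FROM my_table WHERE id < 1000;
SELECT pg_size_pretty(pg_relation_size('my_table'));  -- unchanged

-- A plain VACUUM makes the dead rows' space reusable within the table,
-- and truncates the file only if the trailing pages end up empty.
VACUUM my_table;
SELECT pg_size_pretty(pg_relation_size('my_table'));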

ACK

> Attempting to copy the data to a different table results in the out of disk error as well, so that is in line with your assessment.  But it actually just shows the problem.  The new table to which the data was copied (though the copy failed due to running out of disk) shows 0 rows, but pg_total_relation_size() for that table shows 27GB.  So now I have an "empty" table that takes up 27GB of disk space.

I think the best thing to do is perform a normal VACUUM on the table

Running VACUUM on the newly created table cleared the 27GB, so that's good (I was planning to run a normal VACUUM but ran FULL).

You can drop some indexes, then run VACUUM FULL, and recreate the dropped indexes afterwards.
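
A sketch of that sequence (the index, table, and column names are hypothetical):

DROP INDEX my_table_col_idx;
VACUUM FULL my_table;
CREATE INDEX my_table_col_idx ON my_table (col);

Dropping the indexes first lowers the peak disk usage, since VACUUM FULL would otherwise rebuild each index while the old copy of the table still exists.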

The table does not have any indexes.  It is mostly an append-only table.
then CREATE EXTENSION IF NOT EXISTS pgstattuple; then do: SELECT *
FROM pgstattuple('<tablename>'); and the same again on the toast table.
If your table still contains many dead rows then perhaps an open
transaction is stopping rows from being turned into free space.
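
Spelled out, that sequence might look like this (my_table is a placeholder; the 'primary'/'toast' labels match the output below, and pg_toast_12345 stands for whatever name the catalog query actually returns):

CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- stats for the main table
SELECT 'primary', * FROM pgstattuple('my_table');

-- look up the name of the table's toast table, then inspect it too
SELECT reltoastrelid::regclass FROM pg_class WHERE oid = 'my_table'::regclass;
SELECT 'toast', * FROM pgstattuple('pg_toast.pg_toast_12345');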

I am not sure how to read the output below.  I see a lot of "free_space" but nothing "dead":

-[ RECORD 1 ]------+------------
?column?           | primary
table_len          | 32055296
tuple_count        | 120764
tuple_len          | 9470882
tuple_percent      | 29.55
dead_tuple_count   | 0
dead_tuple_len     | 0
dead_tuple_percent | 0
free_space         | 20713580
free_percent       | 64.62
-[ RECORD 2 ]------+------------
?column?           | toast
table_len          | 88802156544
tuple_count        | 15126830
tuple_len          | 30658625743
tuple_percent      | 34.52
dead_tuple_count   | 0
dead_tuple_len     | 0
dead_tuple_percent | 0
free_space         | 57653329312
free_percent       | 64.92


It says your table can be reduced by about 60%.

That's what I thought, and releasing 65% of 84GB would be major here, but unfortunately I am unable to release it because VACUUM FULL requires more space than I currently have available.

Perhaps disabling the WAL, if possible, could help VACUUM FULL complete.  Or some way to do an in-place VACUUM so that it doesn't write all the data to a new table.

Maybe this article will be interesting for you:

https://www.depesz.com/2013/06/21/bloat-removal-by-tuples-moving/
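
For what it's worth, the core trick in that article is to UPDATE the rows that sit on the table's last pages so that their new versions land in the free space earlier in the file; a plain VACUUM can then truncate the emptied tail in place, without the full rewrite that VACUUM FULL does.  A rough sketch (the table, column, and page threshold are all made up):

-- rewrite rows stored beyond block 100000; (ctid::text::point)[0]
-- extracts the block number from a row's physical address
UPDATE my_table SET some_col = some_col
WHERE (ctid::text::point)[0] > 100000;

-- a plain VACUUM can now truncate the empty trailing pages
VACUUM my_table;

(Most of the bloat in this thread is in the toast table, which this heap-level trick does not directly address.)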

Thank you,

Igal

Once pgstattuple reports that "tuple_len" for the table, its toast table,
and all its indexes has been reduced to an acceptable value, you
should try a VACUUM FULL.  Remember that VACUUM FULL must also write
WAL, so if WAL is on the same volume, you'll need to account for the
space that requires when deciding how much data to remove from the
table.
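
As a rough lower bound on the space the VACUUM FULL rewrite itself will need, you can sum the live tuple bytes reported by pgstattuple for the table and its toast table (names are placeholders; WAL and any index rebuilds come on top of this):

SELECT pg_size_pretty(
         (SELECT tuple_len FROM pgstattuple('my_table'))
       + (SELECT tuple_len FROM pgstattuple('pg_toast.pg_toast_12345'))
       ) AS approx_rewrite_size;

For the numbers shown earlier in this thread that comes to roughly 28-29GB of live data, before WAL.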

WAL is on the same volume.  The PGDATA directory is mounted in a Docker container.

Isn't there any way to do an in-place VACUUM or pause the WAL at the risk of losing some data if recovery is required?

There is a catch-22 here.  I can't reclaim the disk space because that requires disk space.  Surely I'm not the first one to have encountered that problem with Postgres.

> This is mostly transient data, so I don't mind deleting rows, but if some day this could happen in production then I have to know how to deal with it without losing all of the data.

For the future, it would be better to delete more often than to wait
until the table grows too large.  A normal VACUUM will turn the space used
by dead tuples back into free space, so if done often enough there
won't be a need for VACUUM FULL.
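
A minimal sketch of such a recurring cleanup (table, column, and interval are made up):

-- run periodically, e.g. from cron or pg_cron
DELETE FROM my_table WHERE inserted_at < now() - interval '30 days';

-- autovacuum will normally reclaim the dead rows on its own;
-- an explicit VACUUM just makes the space reusable sooner
VACUUM my_table;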

ACK.  This issue came up while implementing a retention policy that will be enforced regularly.

Thank you for all of your help,

Igal
