On Wed, 5 Jul 2017 07:18:03 +0200
Chris Travers <chris.travers@xxxxxxxxx> wrote:

> Hi;
>
> First, I haven't seen major problems of database bloat in a long time, which
> is why I find this case strange. I wanted to ask here what may be causing it.
>
> Problem:
> ==========
> The database is in the 100GB to 200GB size range, running on btrfs (not my
> choice) with nodatacow enabled (which I set up to fix a performance issue).
> The workload is a very heavy batch-update workload.
>
> The database bloats linearly. I have measured this on one table (of 149M
> rows).
>
> After vacuum full this table is (including indexes): 17GB
> Every 24 hrs it seems to add roughly its original size to the file system, +/-.
>
> Bloat seems to be affecting both indexes and underlying tables.
>
> Vacuum verbose does not indicate a disproportionate number of rows being
> unremovable, so autovacuum is keeping up without too much difficulty.
>
>
> Troubleshooting so far
> =======================
>
> filefrag finds a single extent on each file, so copy-on-write is not the
> culprit.
>
> Selecting the smallest 10 values of ctid from one of the bloating tables
> shows the first page used is around page 35, with one row per used page
> (and large gaps in between).
>
> Questions
> ===========
> I assume that it is the fact that rows update frequently which is the
> problem here? But why doesn't Postgres re-use any of the empty disk pages?
>
> More importantly, is there anything that can be done to mitigate this issue
> other than a frequent vacuum full?

2x the working size for a frequently updated table isn't terrible bloat. Or
are you saying it grows 2x every 24 hours and keeps growing?

The real question is how often the table is being vacuumed. How long have you
let the experiment run? Does the table find an equilibrium size where it stops
growing? Have you turned on logging for autovacuum to see how often it
actually runs on this table?

Having no unremovable rows does not indicate that autovacuum is keeping up.
It just indicates that you don't have a problem with uncommitted transactions
holding rows for long periods of time.

Have you looked at tuning the autovacuum parameters for this table? More
frequent vacuums should keep things more under control. However, if the write
load is heavy, you'll probably want to lower autovacuum_vacuum_cost_delay.
Personally, I feel the default value for this should be 0, but there are
likely those who would debate that. In any event, if that setting is too high
it can cause autovacuum to take so long that it can't keep up. In theory,
setting it too low can cause autovacuum to have a negative performance impact,
but I've never seen that happen on modern hardware.

But that's all speculation until you know how frequently autovacuum runs on
that table and how long it takes to do its work. A couple of example queries
and settings for checking this are appended below.

--
PT <wmoran@xxxxxxxxxxxxxxxxx>
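
For the "how often does autovacuum actually run" question, a minimal sketch of
how to check via the statistics views, assuming the table is called bloaty
(substitute the real table name):

    -- When vacuum/autovacuum last ran on the table, how many times each has
    -- run, and how many dead tuples are currently waiting to be reclaimed.
    -- 'bloaty' is a placeholder name.
    SELECT relname, last_vacuum, last_autovacuum,
           vacuum_count, autovacuum_count,
           n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     WHERE relname = 'bloaty';

    -- Log every autovacuum run (and its runtime) to the server log.
    -- ALTER SYSTEM needs 9.4+; on older versions set this in postgresql.conf.
    ALTER SYSTEM SET log_autovacuum_min_duration = 0;
    SELECT pg_reload_conf();

With log_autovacuum_min_duration = 0, each autovacuum action is logged along
with how long it took, which answers both the "how often" and "how long"
questions directly.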
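
For per-table tuning along the lines discussed above, a sketch with
illustrative (not recommended) values; how aggressive the scale factor should
be depends on the update rate:

    -- Have autovacuum trigger sooner on this table, and remove the
    -- cost-based throttling for this table only ('bloaty' is a placeholder).
    ALTER TABLE bloaty SET (
        autovacuum_vacuum_scale_factor = 0.01,  -- trigger at ~1% dead rows
        autovacuum_vacuum_cost_delay   = 0      -- no throttling on this table
    );

Setting the cost delay to 0 on just this table sidesteps the debate about
changing the global default.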
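
And to put a number on the bloat itself rather than eyeballing file sizes, a
rough sketch using the contrib module pgstattuple (same placeholder table
name; the extension has to be installed first):

    -- Total on-disk size of the table plus its indexes.
    SELECT pg_size_pretty(pg_total_relation_size('bloaty'));

    -- Dead-tuple and free-space breakdown within the table itself.
    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT * FROM pgstattuple('bloaty');

free_space that stays high while the table keeps growing would suggest pages
are not being reused, which is what the ctid gaps described in the original
post hint at.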