You likely need to tune your autovacuum settings so that the index bloat reaches a steady state and stops growing. When vacuum runs, it removes dead tuples (row versions no longer visible to any running transaction) from each page (an 8KB block by default) of that table's data files. It also cleans up the indexes, except in newer versions of Postgres where index cleanup can be skipped in some cases (see the manual for which version introduced that and whether it still runs by default).

If you are I/O bound and cannot tune the system defaults to autovacuum more proactively (i.e., when a smaller percentage of rows are dead), then consider changing the settings for just that table, since it seems to be functioning a bit like a queue. Alternatively, you could set up a daily job to run VACUUM ANALYZE on all tables during a period of low system activity.

If you didn't have an index on the columns you are updating, reducing fillfactor would be an option to increase HOT updates and thereby prevent the index bloat. Alas, that's not possible here, since the index needs to reference the very values that are being changed.
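A per-table override might look like the sketch below. The table name and the exact thresholds are hypothetical; the idea is just to make autovacuum fire at a much smaller fraction of dead rows than the 20% default for this one queue-like table:

```sql
-- Hypothetical example: vacuum this table far more aggressively than
-- the cluster-wide defaults (trigger at roughly 1% dead rows plus a
-- small fixed threshold, and analyze at 5%).
ALTER TABLE action_queue SET (
    autovacuum_vacuum_scale_factor  = 0.01,
    autovacuum_vacuum_threshold     = 1000,
    autovacuum_analyze_scale_factor = 0.05
);
```

These storage parameters override only this table; the global settings in postgresql.conf stay untouched for everything else.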
"index002" btree (action_performed, should_still_perform_action, action_performed_at DESC) WHERE should_still_perform_action = false AND action_performed = true
That index seems odd to me. Why index the first two columns at all when the partial-index WHERE clause pins each of them to a single value? I would recreate it with those columns skipped. Also, based on the names, I'd expect the opposite booleans for true and false: the "interesting" rows should be the ones where the action has NOT yet been performed and still needs to be. As defined, I'd expect the index to cover most of the table rather than a small fraction. Perhaps that's just a typo from manually obfuscating the example.

For what it's worth, you can CREATE INDEX CONCURRENTLY a new index, DROP INDEX CONCURRENTLY the old one, then rename the new one to the old name. That gives the same result as a REINDEX CONCURRENTLY.
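A sketch of that swap, assuming a hypothetical table name (the column names come from the index shown above; the WHERE clause is kept as posted, so flip the booleans instead if that matches your actual query):

```sql
-- Build a replacement without the two constant leading columns,
-- without blocking concurrent writes.
CREATE INDEX CONCURRENTLY index002_new
    ON some_table (action_performed_at DESC)
    WHERE should_still_perform_action = false
      AND action_performed = true;

-- Drop the bloated original, again without blocking writes.
DROP INDEX CONCURRENTLY index002;

-- Keep the old name so nothing referencing it by name breaks.
ALTER INDEX index002_new RENAME TO index002;
```

Note that neither CONCURRENTLY statement can run inside a transaction block, so issue them as standalone statements.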