On Tue, Oct 17, 2017 at 1:22 PM, Günce Kaya <guncekaya14@xxxxxxxxx> wrote:

> Hi Michael,
>
> Bulk data inserts are performed by authorised users and by cron throughout
> the day. There are multiple bulk inserts, some of them using dblink (which
> increases I/O during the bulk insert), and around 30 tables are fed by
> these bulk inserts. Each table gets a bulk insert only once a day. Not all
> of these tables receive millions of rows, but around 10 of them do.
>
> I also agree with your last paragraph; I just wanted to hear from people
> who have experienced a similar scenario, or what you think about it.

Generally speaking, making autovacuum more aggressive is the best option if
you're not sure. Waiting to vacuum can let bloat get out of hand, and once a
table is bloated the only fix is a blocking operation like VACUUM FULL or
CLUSTER on an index. Autovacuum taking a while to run is fine as long as it's
keeping up with the deleted / dead tuples.

I would recommend installing the check_postgres script from here:
https://bucardo.org/check_postgres/ to keep track of your bloat. As long as
it's staying fairly static and not growing, you're likely fine. If it's
growing, then either make autovacuum more aggressive or run manual vacuums on
the bloated tables.

The real danger is hitting transaction ID wraparound. PostgreSQL warns you
and runs autovacuum, even if it is turned off, when it detects that a table
is approaching that point. If you do hit wraparound on a busy database,
PostgreSQL will stop accepting connections and force you to run VACUUM in
single-user mode, which is a pain because it takes your whole cluster down
while you're doing it.
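
If you want to make autovacuum more aggressive on just the bloat-prone tables
rather than globally, per-table storage parameters are the usual route. A
rough sketch (the table name and threshold values here are only illustrative,
tune them for your own workload):

    -- hypothetical table; lower scale factors make autovacuum kick in sooner
    ALTER TABLE big_import_table SET (
        autovacuum_vacuum_scale_factor  = 0.02,
        autovacuum_analyze_scale_factor = 0.02,
        autovacuum_vacuum_cost_delay    = 10
    );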
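
For the bloat monitoring, a cron entry along these lines is roughly how
check_postgres gets used (check its docs for the exact flag and threshold
syntax; the database name and limits below are made up):

    # warn at ~100 MB of estimated bloat, go critical at 1 GB
    check_postgres.pl --action=bloat --dbname=mydb --warning='100 MB' --critical='1 GB'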
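
And as a quick way to keep an eye on wraparound, you can watch how close each
database's oldest transaction ID is to autovacuum_freeze_max_age (200 million
by default):

    -- highest datfrozenxid age first; compare against autovacuum_freeze_max_age
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;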