>> Why aren't we more opportunistic about freezing tuples? For instance, if
>> we already have a dirty buffer in cache, we should be more aggressive
>> about freezing those tuples than freezing tuples on disk.
>
> The most widely cited reason is that you lose forensics data. Although
> they are increasingly rare, there are still situations in which the heap
> tuple machinery messes up and the xmin/xmax/etc fields of the tuple are
> the best/only way to find out what happened and thus fix the bug. If
> you freeze early, there's just no way to know.

That argument doesn't apply. If the page is in memory and is being
written anyway, and some of the rows are past vacuum_freeze_min_age,
then why not freeze them rather than waiting for a vacuum process to
read them off disk and rewrite them?

We're not talking about freezing every tuple as soon as it's out of
scope, just the ones which are more than 100m (or whatever the setting
is) old. I seriously doubt that anyone is doing useful forensics using
xids which are 100m old.

--
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com
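
As a minimal sketch of the ages being discussed (assuming a hypothetical
table "mytable"; the table name and LIMITs are placeholders, while
vacuum_freeze_min_age, age(xmin), and pg_class.relfrozenxid are standard
PostgreSQL), the comparison looks like this:

  -- Current freeze threshold, in transactions.
  SHOW vacuum_freeze_min_age;

  -- Per-tuple xmin age; tuples whose age exceeds the threshold are the
  -- ones an opportunistic freeze (or the next vacuum) would target.
  SELECT ctid, xmin, age(xmin) AS xmin_age
  FROM mytable
  ORDER BY age(xmin) DESC
  LIMIT 10;

  -- Table-level view: age of the oldest unfrozen xid in each table.
  SELECT relname, age(relfrozenxid) AS frozenxid_age
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY age(relfrozenxid) DESC
  LIMIT 10;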