"Mark Liberman" <mliberman@xxxxxxxxxxxxxx> writes: > where c.oid = a.attrelid > and c.relnamespace = n.oid > and c.relkind in ('i','r') > and a.atttypid not in (26,27,28,29) > group by c.relname) by_table; > A few notes: > 1) I have used 32 bytes for the row tuple header overhead and 4 bytes = > for index tuple overhead > 2) The attribute types 26,27,28,29 are oid,tid,xid,cid - which, I = > believe are already counted in the row overhead You should not do it that way, because those are perfectly valid datatypes for user columns. Instead of the type test, check for attnum > 0. The "system columns" that represent row overhead items have attnum < 0. You might want to consider ignoring columns where attisdropped, too, though this is a bit of a judgment call since a dropped column might still be eating storage space. Another thing you could do is left-join to pg_stats and use ANALYZE's estimate of average column width where available, instead of hardwired guesses. Another important point is that this calculation is ignoring TOAST space ... do you have any columns wide enough to get toasted? > 1) I have found the 32 bytes overhead mentioned in a few places, but = > have not seen any specific reference to the byte overhead of an index = > header row. Does know the best number to use here for an assumption? 12 bytes (8-byte header + 4-byte line pointer). > 3) Has anyone solved this entire problem in another fashion (e.g. = > auto-pruning - only delete what's necessary to stay within limits). Have you looked at contrib/pgstattuple? regards, tom lane