I have a table that regularly triggers my Nagios database bloat alert. Usually I have to run VACUUM FULL on it more than once to get it back under the threshold, but today I have tried repeatedly and cannot get the alert to resolve.
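(For reference, one cycle of that looks roughly like the following, sanitized to match the table definition below; relpages here is just a quick proxy for what the bloat check measures, not the check itself:

-- page count according to the stats, before and after the VACUUM FULL
SELECT relname, relpages, reltuples::bigint
FROM pg_class
WHERE oid = 'table_schema.table_name'::regclass;

VACUUM FULL table_schema.table_name;
ANALYZE table_schema.table_name;
-- re-run the SELECT above and compare relpages)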
I had a discussion with one of the primary developers about how the table is used, and it turns out they basically only ever insert into it. That sort of flies in the face of the conventional wisdom that bloat comes from frequent updates and deletes.
While we were looking at it, one of the things that struck me is that this table has a column with the json datatype. I looked through information_schema.columns and there is only one other table with a json column, and I recall having bloat issues with that table in the past as well.
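The information_schema check was nothing fancy, something along these lines:

SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE data_type = 'json'
ORDER BY table_schema, table_name;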
I'm wondering whether the json datatype is just naturally more bloat-prone than other types, whether the bloat query in the check_postgres.pl Nagios script is inaccurate for this table, or whether my thresholds are simply too low.
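One thing that would take the script's estimate out of the equation is pgstattuple, assuming the contrib extension can go on this box; it reports actual dead space and free space rather than an estimate:

CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT table_len, tuple_len, dead_tuple_len, free_space, free_percent
FROM pgstattuple('table_schema.table_name');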
The table design itself is pretty simple:

 Column     | Type                        | Modifiers
------------+-----------------------------+-----------------------------------------------------------------------
 id         | integer                     | not null default nextval('table_schema.table_name_id_seq'::regclass)
 type       | character varying(255)     |
 criteria   | json                        |
 created_at | timestamp without time zone | not null
 updated_at | timestamp without time zone | not null
Indexes:
"table_name_pkey" PRIMARY KEY, btree (id)
The Nagios output looks like this (sanitized):
POSTGRES_BLOAT WARNING: DB "db_name" (host:host.domain.com) (db db_name) table table_schema.table_name rows:29305 pages:1733 shouldbe:330 (5.3X) wasted size:11493376 (10 MB)
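For what it is worth, the reported wasted size works out to exactly the page difference times the default 8 kB block size, so the alert's arithmetic is at least internally consistent:

SELECT (1733 - 330) * 8192 AS wasted_bytes;  -- 11493376 bytes, the ~10 MB in the alert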
Thoughts?
Greg Haase