So, I've run a number of PG databases over the years, and I've now run into something I've never seen before... in the most trivial of places. A couple of months back my girlfriend installed a music player called Amarok on her system. I don't know much about it, but it stores its metadata in a backend database (PostgreSQL here). I recently noticed that this database has grown to a huge size, which I found somewhat odd because none of the tables have more than around 1000 rows.

I hadn't been vacuuming because I didn't think anything would ever be deleted, so I performed a VACUUM FULL... but no luck, it was still about 6.4GB. With some help from the folks on IRC I discovered:

postgres=# select relname, pg_relation_size(oid) FROM pg_class ORDER BY 2 DESC LIMIT 2;
           relname            | pg_relation_size
------------------------------+------------------
 pg_shdepend_depender_index   |        159465472
 pg_shdepend_reference_index  |         97271808
(2 rows)

The pg_shdepend table has only about 50 rows, so why doesn't vacuum shrink the indexes?

I understand that I can take the database into single user mode and reindex that table. In fact I could just drop the whole database, since the application can simply rebuild it. So my primary concern isn't fixing this, as I'm pretty sure I'll have no problem doing that. I'd just like to know why it got into this state, and make sure there isn't some PG bug here worthy of exploration.
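For reference, the fix I have in mind is roughly the following (I haven't run it yet, the data directory path is a placeholder, and this assumes a version where "postgres --single" is the standalone backend). My understanding is that indexes on a shared catalog like pg_shdepend can only be rebuilt from a standalone backend started with system indexes ignored:

# stop the cluster, then start a standalone backend against it
$ pg_ctl stop -D /path/to/data
$ postgres --single -P -D /path/to/data postgres

backend> REINDEX TABLE pg_catalog.pg_shdepend;
backend> -- or rebuild all system catalog indexes in this database:
backend> REINDEX SYSTEM postgres;

Then shut the backend down (ctrl-D) and restart the server normally.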