On Fri, Dec 17, 2010 at 5:22 PM, Gauthier, Dave <dave.gauthier@xxxxxxxxx> wrote:
> max_fsm_pages = 200000
> max_fsm_relations = 12000
>
> There are 12 DBs with roughly 30 tables+indexes each.
>
> There are apparently 2 "bad" DBs, both identical in terms of data model (clones with different data). I've pg_dump'ed one of them to a file, dropped the DB (which took a long time, since millions of files were deleted), and recreated it. It now has 186 files.
>
> "ls -1 | wc" took a while for the other bad one, but it eventually came up with exactly 7,949,911 files, so yes, millions. The one I dropped had millions too. Something is clearly wrong. But since the recreate worked for the first one, I'll do the same thing to fix this one.
>
> What I need to know, then, is how to prevent this in the future. It's very odd, because the worse of the 2 bad DBs is a sister DB to one that's no problem at all. Here's the picture...
>
> I have a DB, call it "foo", that gets loaded with a ton of data at night. The users query it read-only all day. At midnight, an empty DB called "foo_standby", which is identical to "foo" in terms of data model, is reloaded from scratch. It takes hours, but when it's done I do a few database renames to swap "foo" with "foo_standby" (really just a name swap). "foo_standby" then serves as a live backup of yesterday's data. Come the next midnight, I truncate all its tables and start the process over again.

maybe something in this process is leaking files. if I were in your shoes, I'd recreate the database from scratch, then watch the file count carefully and look for unusual growth. this is probably not the case, but if it is in fact a backend bug it will turn up again right away.

anything else interesting jump out about these files? for example, are there a lot of 0 byte files?

merlin
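
[Editor's note: as a rough illustration of the "watch the file count" and "0 byte files" suggestions above, here is a minimal shell sketch. It assumes a standard $PGDATA/base/<oid> layout, a database named "foo" (a placeholder, not necessarily the real name), and that psql can connect without a password; none of these details come from the thread itself.]

    #!/bin/sh
    # Minimal sketch: count the on-disk files for one database and flag
    # any zero-byte files.  Assumes the usual $PGDATA/base/<oid> layout.

    DBNAME=foo                      # hypothetical database name

    PGDATA=$(psql -At -c "show data_directory")
    DBOID=$(psql -At -c "select oid from pg_database where datname = '$DBNAME'")
    DBDIR="$PGDATA/base/$DBOID"

    # Total number of files backing this database.
    echo "file count for $DBNAME: $(ls -1 "$DBDIR" | wc -l)"

    # List zero-byte files; their names can be compared against
    # pg_class.relfilenode to see whether they belong to a live relation.
    find "$DBDIR" -maxdepth 1 -type f -size 0 -print

Run once a day (say, right after the midnight reload and rename swap), this would make steady growth in the count, or an accumulating pile of zero-byte files, show up quickly.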