Jan Wieck <JanWieck@xxxxxxxxx> writes:
> >> PostgreSQL itself doesn't work too well with tens of thousands of tables.
> >
> > Really? AFAIK it should be pretty OK, assuming you are on a filesystem
> > that doesn't choke with tens of thousands of entries in a directory.
> > I think we should put down a TODO item to see if we can improve the
> > stats subsystem's performance in such cases.
>
> Okay, I should be more specific. The problem with tens of thousands of
> tables does not exist just because they are there. It will emerge if all
> those tables are actually used, because that means you'd need all the
> pg_class and pg_attribute rows cached, and your vfd cache will constantly
> rotate.

I think people occasionally get bitten by not having their pg_* tables
vacuumed or analyzed regularly. If you have lots of tables and the stats
for pg_class and related catalogs are never updated, the planner can take
a long time to plan queries.

This happens if you schedule a cron job to do your vacuuming and analyzing
but connect as a user other than the database owner. For example, you
leave the database owned by "postgres" but create a separate user to own
all the tables and use that user to run the regularly scheduled "vacuum
analyze"s.

I'm not sure how often these problems get properly diagnosed; the symptoms
are quite mysterious. In retrospect I think I ran into something like this
myself and never figured out what was going on. It only went away when I
upgraded the database and went through an initdb cycle.

--
greg
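
A minimal sketch of the workaround described above, assuming an
/etc/crontab-style entry and an illustrative install path; the only point
being made is that the scheduled job should run as the database owner
(here "postgres"), since tables you don't own are skipped by VACUUM and the
pg_* catalogs therefore never get maintained:

    # minute hour dom month dow  user      command   (schedule and path are
    # illustrative). Running database-wide vacuumdb as "postgres" also covers
    # the system catalogs, not just the tables the connecting user owns.
    0 3 * * *  postgres  /usr/local/pgsql/bin/vacuumdb --all --analyze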