Jan Wieck <JanWieck@xxxxxxxxx> writes:
> On 7/28/2005 2:28 PM, Tom Lane wrote:
>> Jan Wieck <JanWieck@xxxxxxxxx> writes:
>>> PostgreSQL itself doesn't work too well with tens of thousands of
>>> tables.
>>
>> Really?  AFAIK it should be pretty OK, assuming you are on a filesystem
>> that doesn't choke with tens of thousands of entries in a directory.
>> I think we should put down a TODO item to see if we can improve the
>> stats subsystem's performance in such cases.

> Okay, I should be more specific.  The problem with tens of thousands of
> tables does not exist just because they are there.  It emerges when all
> of those tables are actually used, because then you need all the
> pg_class and pg_attribute rows cached, and your vfd cache will
> constantly rotate.

Sure, if you have a single backend touching all the tables, that backend
will have some issues.  But the stats problem is worse, because the stats
subsystem tracks every table anyone has ever touched; that makes the
issue much more pressing.

> Then again, the stats file is only written; there is nothing that
> actually forces the blocks out.  On a busy system, an individual stats
> file will be created, written to, renamed, live for 500 ms, and then be
> thrown away by the next stats file's rename operation.  I would assume
> that with a decent filesystem and appropriate OS buffering, none of the
> data blocks of most stats files ever hit the disk.  I must be missing
> something.

This is possibly true --- Phil, do you see actual disk I/O happening from
the stats writes, or is it just kernel calls?

			regards, tom lane
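
[Editor's note: a minimal C sketch of the create/write/rename lifecycle Jan
describes above, for readers unfamiliar with the pattern.  The file names,
the function name write_stats_file, and the error handling are illustrative
assumptions, not the actual PostgreSQL stats collector code.]

    #include <stdio.h>

    /*
     * Sketch of the write-then-rename pattern under discussion.  The
     * file names are assumed for illustration only; the real stats
     * collector in the PostgreSQL sources differs in detail.
     */
    static void
    write_stats_file(const char *contents)
    {
        const char *tmpname = "global/pgstat.tmp";   /* assumed temp name */
        const char *permname = "global/pgstat.stat"; /* assumed final name */
        FILE       *fp;

        fp = fopen(tmpname, "w");
        if (fp == NULL)
        {
            perror("fopen");
            return;
        }

        fputs(contents, fp);

        /*
         * Note there is no fsync() here: the written blocks sit in the OS
         * page cache, and since the file is replaced again shortly by the
         * next rename, the kernel may never push most of them to disk.
         * That is the behavior Jan assumes above.
         */
        if (fclose(fp) != 0)
        {
            perror("fclose");
            return;
        }

        /* Atomically swap the new file into place over the old one. */
        if (rename(tmpname, permname) != 0)
            perror("rename");
    }

Whether those blocks ever reach disk is exactly what the question to Phil
asks; watching the backend with strace (to see the write and rename calls)
while comparing against iostat or vmstat output (to see physical writes)
would be one way to distinguish kernel calls from real disk I/O.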