On 7/28/2005 2:28 PM, Tom Lane wrote:
> Jan Wieck <JanWieck@xxxxxxxxx> writes:
>> On 7/28/2005 2:03 PM, Tom Lane wrote:
>>> Well, there's the problem --- the stats subsystem is designed in a way
>>> that makes it rewrite its entire stats collection on every update.
>>> That's clearly not going to scale well to a large number of tables.
>>> Offhand I don't see an easy solution ... Jan, any ideas?
>
>> PostgreSQL itself doesn't work too well with tens of thousands of
>> tables.
>
> Really? AFAIK it should be pretty OK, assuming you are on a filesystem
> that doesn't choke with tens of thousands of entries in a directory.
> I think we should put down a TODO item to see if we can improve the
> stats subsystem's performance in such cases.
Okay, I should be more specific. The problem with tens of thousands of
tables doesn't arise merely because they exist. It emerges when all of
those tables are actually used, because then you need all the pg_class
and pg_attribute rows cached, and the vfd cache will constantly rotate.
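
To illustrate what I mean by the vfd cache rotating, here is a purely
illustrative sketch in C of a capped descriptor pool with LRU eviction.
This is not fd.c; the cap, the vfd_open/VfdSlot names, and MAX_REAL_FDS
are made up for the example. The point is just that once more relations
are in active use than the pool can hold, every access outside the pool
costs a close() plus a fresh open():

/*
 * Minimal sketch (NOT PostgreSQL's fd.c): a fixed-size pool of real OS
 * file descriptors with LRU eviction.  When the working set of files is
 * larger than MAX_REAL_FDS, the pool "rotates": each miss closes the
 * least recently used descriptor and reopens another file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_REAL_FDS 4          /* hypothetical cap on real descriptors */

typedef struct
{
    char    path[256];
    int     fd;                 /* -1 means "slot not in use" */
    long    last_used;          /* simple LRU counter */
} VfdSlot;

static VfdSlot  pool[MAX_REAL_FDS];
static long     clock_tick = 0;
static int      pool_initialized = 0;

/* Return a real fd for path, evicting the LRU slot if the pool is full. */
static int
vfd_open(const char *path)
{
    int     victim = 0;
    int     i;

    if (!pool_initialized)
    {
        for (i = 0; i < MAX_REAL_FDS; i++)
            pool[i].fd = -1;
        pool_initialized = 1;
    }

    for (i = 0; i < MAX_REAL_FDS; i++)
    {
        if (pool[i].fd >= 0 && strcmp(pool[i].path, path) == 0)
        {
            pool[i].last_used = ++clock_tick;
            return pool[i].fd;          /* cache hit, no syscall needed */
        }
        if (pool[i].fd < 0)
            victim = i;                 /* prefer an empty slot */
        else if (pool[victim].fd >= 0 &&
                 pool[i].last_used < pool[victim].last_used)
            victim = i;                 /* otherwise remember the LRU slot */
    }

    if (pool[victim].fd >= 0)
        close(pool[victim].fd);         /* eviction: the "rotation" cost */

    pool[victim].fd = open(path, O_RDONLY);
    snprintf(pool[victim].path, sizeof(pool[victim].path), "%s", path);
    pool[victim].last_used = ++clock_tick;
    return pool[victim].fd;
}

With tens of thousands of actively used tables the hit rate of such a
pool goes to zero and almost every table access pays for a close/open
pair, on top of the catalog cache pressure.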
Then again, the stats file is only written; nothing actually forces the
blocks out to disk. On a busy system, an individual stats file is
created, written to, renamed, lives for about 500ms, and is thrown away
by the next stats file's rename operation. I would assume that with a
decent filesystem and appropriate OS buffering, the data blocks of most
stats files never even hit the disk. I must be missing something.
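
For clarity, the cycle I have in mind looks roughly like this. This is a
sketch, not the actual pgstat.c code; the write_stats_snapshot function
and the STATS_TMP_FILE/STATS_FILE names are made up for the example:

/*
 * Sketch of the write-and-rename cycle described above: the new stats
 * snapshot is written to a temporary file and then rename()d over the
 * live file.  No fsync() is issued, so if the snapshot is replaced
 * 500ms later, its dirty buffers can simply be dropped by the kernel
 * without ever reaching the disk.
 */
#include <stdio.h>

#define STATS_TMP_FILE   "pgstat.tmp"   /* hypothetical file names */
#define STATS_FILE       "pgstat.stat"

static void
write_stats_snapshot(const void *stats, size_t len)
{
    FILE   *fp;

    fp = fopen(STATS_TMP_FILE, "wb");
    if (fp == NULL)
    {
        perror("fopen");
        return;
    }

    /* dump the whole collection; the real collector loops over one
     * hash entry per database/table, rewriting everything each time */
    if (fwrite(stats, 1, len, fp) != len)
        perror("fwrite");

    /* fclose() pushes stdio buffers into the OS cache, but nothing
     * forces them to disk -- intentionally no fsync() here */
    if (fclose(fp) != 0)
        perror("fclose");

    /* atomically replace the previous snapshot; readers always see
     * either the old complete file or the new complete file */
    if (rename(STATS_TMP_FILE, STATS_FILE) != 0)
        perror("rename");
}

So the cost I'd expect to dominate is the CPU spent rewriting the whole
collection every 500ms, not physical I/O.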
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@xxxxxxxxx #