
Re: Scalability with large numbers of tables

Christopher Browne wrote:
> Oops! spam_from_postgresql_general@xxxxxxxxxxxx (Phil Endecott) was seen spray-painting on a wall:
>> I have a single database with one schema per user.  Each user has a
>> handful of tables, but there are lots of users, so in total the
>> database has thousands of tables.
>
> If you've got tens of thousands of relations, the tab completion code
> has to draw the whole list of relations from pg_class into memory and
> "marshal" it into a form usable by GNU Readline.  THAT is what you're
> seeing slow down.  As the number of tables, n, grows, the cost of that
> grows with order of complexity O(n).
>
> Actual queries on actual tables won't be slow; they will look up
> relation names directly in pg_class, and presumably go from there to
> get the file name(s) on the filesystem, which each represent
> operations of complexity of order O(n log n).  Which remains fast even
> if there are millions of tables.
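
Just to make the tab-completion cost concrete: the completion code has to pull every visible relation name out of pg_class in one go, roughly like the query below (only a sketch, not the exact statement psql builds), and that result set grows linearly with the number of tables:

SELECT c.relname
  FROM pg_catalog.pg_class c
 WHERE c.relkind IN ('r', 'v', 'S')              -- tables, views, sequences
   AND pg_catalog.pg_table_is_visible(c.oid)     -- only names on the search_path
 ORDER BY c.relname;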

I guess you mean O(log n) in the second paragraph (which would imply that there is an index on relname in pg_class). If the complexity were O(n log n), it would be slower than an O(n) algorithm, and
therefore slower (or, at least, it would scale worse) than the tab-completion code ;-)
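
The catalog indexes are easy to check; a query along these lines lists them, and one of them has relname as its leading column, which is what makes the O(log n) lookup possible:

SELECT indexname, indexdef
  FROM pg_indexes
 WHERE schemaname = 'pg_catalog'   -- system catalogs live here
   AND tablename  = 'pg_class';

On a stock installation that shows, among others, an index on (relname, relnamespace), so a lookup by name only has to descend one btree.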


greetings, Florian Pflug


