I'm not an expert.
Turn off tab completion? It's probably scanning through all the possible table names, and the completion code probably isn't designed for that many. And with 42,000 tables, tab completion may not be that helpful anyway.
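If you want to keep psql but lose the completion, readline can be configured per-application, so you don't have to patch anything. A sketch of a ~/.inputrc entry (the `$if psql` conditional matches the application name psql registers with readline; untested on your setup):

```
# ~/.inputrc -- disable readline completion only inside psql
$if psql
# Make Tab insert a literal tab instead of triggering completion
TAB: self-insert
$endif
```

Other readline clients (bash, etc.) are unaffected, since the binding only applies inside the conditional.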
Don't use ext2/ext3? There are other filesystems on Linux (XFS, ReiserFS, JFS) which perform decently with thousands of files in a directory, because they index directories instead of scanning them linearly. As for file sizes: ext2/ext3 do cap single-file size (the limit depends on block size), but PostgreSQL splits tables into 1 GB segment files and stores large objects in small chunks in pg_largeobject, so you're unlikely to hit a filesystem file-size limit before hitting PostgreSQL's own. I'd just store multi-GB stuff out of the DB anyway.
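Also, if you're on a recent 2.6 kernel with current e2fsprogs, ext3 itself can use hashed (HTree) directory indexes, which should help a lot with a 42,000-entry directory. A sketch, assuming the filesystem is unmounted and /dev/sda1 is a placeholder for your data volume:

```shell
# Enable hashed directory indexes on an existing ext3 filesystem
# (/dev/sda1 is a placeholder -- substitute your device, unmounted)
tune2fs -O dir_index /dev/sda1
# Rebuild indexes for directories that already exist on the filesystem
e2fsck -fD /dev/sda1
```

Directories created after enabling the feature get indexed automatically; the e2fsck -fD pass optimizes the ones that already exist.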
At 01:24 PM 2/20/2005 +0000, Phil Endecott wrote:
Dear Postgresql experts,
I have a single database with one schema per user. Each user has a handful of tables, but there are lots of users, so in total the database has thousands of tables.
I'm a bit concerned about scalability as this continues to grow. For example, I find that tab-completion in psql is now unusably slow; if anything more important has the same algorithmic complexity, it will be causing problems too. There are 42,000 files in the database directory. This is enough that, with a "traditional" unix filesystem like ext2/3, kernel operations on directories take a significant time. (In other applications I've generally used a guide of 100-1000 files per directory before adding extra layers, but I don't know how valid this is.)