Re: 15,000 tables

On Thu, 1 Dec 2005, Craig A. James wrote:

> So say I need 10,000 tables, but I can create tablespaces. Wouldn't that solve the performance problem caused by Linux's (or ext2/3's) problems with large directories?
>
> For example, if each user creates (say) 10 tables, and I have 1000 users, I could create 100 tablespaces, and assign groups of 10 users to each tablespace. This would limit each tablespace to 100 tables, and keep the ext2/3 file-system directories manageable.
>
> Would this work?  Would there be other problems?
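As a quick sanity check on the arithmetic in the proposal, here is a minimal sketch (names and the bucketing scheme are illustrative, not from the original mail) showing that bucketing 1000 users into groups of 10 yields 100 tablespaces with 100 tables each:

```python
# Illustrative sketch of the proposed layout: 1000 users, 10 tables each,
# grouped 10 users per tablespace. All names here are hypothetical.
USERS = 1000
TABLES_PER_USER = 10
USERS_PER_TABLESPACE = 10

def tablespace_for(user_id):
    """Assign a user to a tablespace by simple integer bucketing."""
    return user_id // USERS_PER_TABLESPACE

tablespaces = {}
for uid in range(USERS):
    tablespaces.setdefault(tablespace_for(uid), []).append(uid)

num_tablespaces = len(tablespaces)                      # 100 tablespaces
tables_per_ts = len(tablespaces[0]) * TABLES_PER_USER   # 100 tables each
print(num_tablespaces, tables_per_ts)  # 100 100
```

So each ext2/3 directory would indeed hold on the order of 100 table files rather than 10,000.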

This would definitely help. However, there's still the question of how large the tables get, and how many files in total are needed to hold those 100 tables.
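To make that concern concrete: PostgreSQL splits each relation into segment files of 1 GB by default, and each index is its own relation with its own files, so the file count per directory grows with table size. A rough sketch, assuming hypothetical sizes (100 tables of 5 GB each, one 1 GB index apiece):

```python
import math

SEGMENT_GB = 1  # PostgreSQL's default relation segment size

def segment_files(size_gb):
    """Number of 1 GB segment files needed to hold one relation."""
    return max(1, math.ceil(size_gb / SEGMENT_GB))

# Hypothetical sizing: 100 tables of 5 GB, each with one 1 GB index.
heap_files = 100 * segment_files(5)    # 500 heap segment files
index_files = 100 * segment_files(1)   # 100 index files
print(heap_files + index_files)  # 600
```

So a tablespace "limited" to 100 tables can still mean many hundreds of files in the directory once the tables grow.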

You still have the problem of seeking around to deal with all these different files (and tablespaces just spread them further apart). You can't eliminate the seeks, but a large write-back journal (full data, as opposed to metadata-only) would mask the problem.

It would be a trade-off: you end up writing all your data twice, so throughput would be lower, but since the data is safe as soon as it hits the journal, the latency for any one request would be lower. That lets the system keep the CPU busy and overlap useful work with the seeking.
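A back-of-envelope version of that trade-off, with illustrative (assumed, not measured) disk numbers:

```python
# Assumed figures for illustration only: a disk with 60 MB/s sequential
# bandwidth and 8 ms average seek time.
disk_bw_mb_s = 60.0
seek_ms = 8.0
ms_per_mb = 1000.0 / disk_bw_mb_s  # time to write 1 MB sequentially

# Full-data journaling writes every byte twice (journal + final location),
# so sustained throughput is roughly halved.
effective_throughput = disk_bw_mb_s / 2  # ~30 MB/s

# But a request is durable after the sequential journal write alone,
# while writing directly into one of thousands of table files costs a
# seek first. Per-request latency for a 1 MB write:
latency_journaled_ms = ms_per_mb            # sequential append, no seek
latency_direct_ms = seek_ms + ms_per_mb     # seek to the table file first

print(effective_throughput, latency_journaled_ms < latency_direct_ms)
```

The point is the shape of the numbers, not their exact values: you pay in sustained bandwidth and win on per-request latency.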

David Lang

