Re: 15,000 tables

On Thu, 1 Dec 2005, Michael Riess wrote:

> Hi,
>
> we are currently running a postgres server (upgraded to 8.1) which has one large database with approx. 15,000 tables. Unfortunately, performance suffers because the internal tables (especially the one that holds the attribute info) get too large.

Is it because the internal tables get large, or is it a problem with disk I/O?
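
One quick way to tell the two apart (a rough sketch; "mydb" is a placeholder database name, and the size functions assume the built-ins that shipped with 8.1):

  # how big has the attribute catalog actually gotten?
  $ psql mydb -c "SELECT pg_size_pretty(pg_relation_size('pg_attribute'))"

  # watch per-device utilization while the slow queries run
  # (iostat comes from the sysstat package)
  $ iostat -x 5

If pg_attribute is only a few tens of MB but the disks sit near 100% busy, the catalog itself probably isn't the bottleneck.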

With 15,000 tables you are talking about a LOT of files to hold them: 30,000 files, assuming one index per table and each table small enough to not need more than one file. On Linux ext2/3, that many files in one directory will slow you down horribly.

Try different filesystems (from my testing and from other posts it looks like XFS is a leading contender), and also play around with the tablespaces feature in 8.1 to move things out of the main data directory into multiple directories; see the sketch below.

Also note that if you do an ls -l on the parent directory, you will see that the directory itself stays large if it has ever held lots of files. The only way to shrink it is to mv the old directory to a new name, create a new directory, and move the files from the old directory into the new one; a rough recipe follows the tablespace sketch.
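
The tablespace approach, in outline (the path and names here are made up; the location must be an existing empty directory owned by the postgres OS user, and CREATE TABLESPACE requires superuser):

  $ mkdir /disk2/pg_ts1
  $ chown postgres:postgres /disk2/pg_ts1
  $ psql mydb -c "CREATE TABLESPACE ts1 LOCATION '/disk2/pg_ts1'"
  # move a table (and, separately, its index) out of the main data dir
  $ psql mydb -c "ALTER TABLE some_table SET TABLESPACE ts1"
  $ psql mydb -c "ALTER INDEX some_table_pkey SET TABLESPACE ts1"

Spreading the 30,000 files across several tablespaces keeps any one directory's entry count down.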
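
And the directory rebuild trick, roughly (server stopped first; the data path and the database oid 16384 are examples only):

  $ pg_ctl -D /var/lib/pgsql/data stop
  $ cd /var/lib/pgsql/data/base
  $ mv 16384 16384.old
  $ mkdir 16384 && chown postgres:postgres 16384 && chmod 700 16384
  # with tens of thousands of files this glob can exceed the kernel's
  # argument-length limit; fall back to find ... -exec mv if it does
  $ mv 16384.old/* 16384/
  $ rmdir 16384.old
  $ pg_ctl -D /var/lib/pgsql/data start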

David Lang


