Re: max_files_per_process limit

On Mon, Nov 10, 2008 at 5:24 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
"=?ISO-8859-1?Q?Dilek_K=FC=E7=FCk?=" <dilekkucuk@xxxxxxxxx> writes:
> We have a database of about 62000 tables (about 2000 tablespaces) with an
> index on each table. Postgresql version is 8.1.

You should probably rethink that schema.  A lot of similar tables can be
folded into one table with an extra key column.  Also, where did you get
the idea that 2000 tablespaces would be a good thing?  There's really no
point in more than one per spindle or filesystem.
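A minimal sketch of the folding Tom describes, using hypothetical table and column names (the original per-source tables and their columns are not shown in the thread):

```sql
-- Sketch only: names are illustrative assumptions.
-- Instead of one table per data source (readings_1 ... readings_62000),
-- keep a single table with the source id as an extra key column:
CREATE TABLE readings (
    source_id  integer NOT NULL,     -- replaces the per-table split
    ts         timestamp NOT NULL,
    value      double precision,
    PRIMARY KEY (source_id, ts)      -- one index serves all sources
);
```

One table with a composite key means one relation file and one index instead of 62000 of each, which sidesteps both the per-directory file count and the descriptor pressure discussed below.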

> Although after the initial inserts to about 32000 tables the subsequent
> inserts are considerably fast, subsequent inserts to more than 32000 tables
> are very slow.

This has probably got more to do with inefficiencies of your filesystem
than anything else --- did you pick one that scales well to lots of
files per directory?
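For scale: PostgreSQL stores at least one file per table and per index, so 62000 tables with an index each means well over 120000 files under the cluster's data directory. A generic sketch for checking how many files a directory tree holds (the directory is a placeholder; point it at the cluster's base/ or a tablespace directory):

```shell
# Count regular files up to two levels deep under DIR.
# DIR defaults to the current directory; set it to the cluster's
# base/ directory (path varies per installation) for a real check.
DIR=${DIR:-.}
find "$DIR" -maxdepth 2 -type f | wc -l
```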
 
The database runs on FreeBSD 6.3 with the UFS file system. The machine has 32 GB of RAM, two quad-core Intel Xeon 2.66 GHz processors, and about 11 TB of RAID5 storage.

> This seems to be due to the datatype (integer) of max_files_per_process
> option in the postgres.conf file which is used to set the maximum number of
> open file descriptors.

It's not so much the datatype of max_files_per_process as the datatype
of kernel file descriptors that's the limitation ...
 
We have not seen any kernel messages about the file descriptor table (such as "file: table is full"), but we will revisit both the database schema (tablespaces, etc.) and the kernel tuning variables.
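A sketch of the arithmetic behind that limit, with illustrative values (these are assumptions, not the settings from this cluster): each backend may hold up to max_files_per_process descriptors, so the kernel-wide table must cover roughly the product with max_connections.

```shell
# Illustrative values, not the poster's actual configuration:
max_connections=100          # from postgresql.conf
max_files_per_process=1000   # from postgresql.conf

# Worst-case descriptor demand from PostgreSQL backends alone:
echo $(( max_connections * max_files_per_process ))

# On FreeBSD, compare the result against the kernel limits, e.g.:
#   sysctl kern.maxfiles kern.maxfilesperproc kern.openfiles
```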
 
Thanks,
Dilek Küçük
 


                       regards, tom lane

