Re: max_files_per_process limit

"Dilek Küçük" <dilekkucuk@xxxxxxxxx> writes:
> We have a database of about 62000 tables (about 2000 tablespaces) with an
> index on each table. Postgresql version is 8.1.

You should probably rethink that schema.  A lot of similar tables can be
folded into one table with an extra key column.  Also, where did you get
the idea that 2000 tablespaces would be a good thing?  There's really no
point in more than one per spindle or filesystem.
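A minimal sketch of the folding idea (table and column names here are hypothetical, not from this thread):

```sql
-- Instead of one near-identical table per data source:
--   CREATE TABLE readings_00001 (ts timestamptz, value float8);
--   CREATE TABLE readings_00002 (ts timestamptz, value float8);
--   ...
-- use a single table with an extra key column that replaces the
-- table name:
CREATE TABLE readings (
    source_id integer     NOT NULL,  -- discriminator key
    ts        timestamptz NOT NULL,
    value     float8,
    PRIMARY KEY (source_id, ts)
);
-- Queries then filter on source_id rather than picking a table,
-- and one index serves where there were tens of thousands.
```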

> Although after the initial inserts to about 32000 tables the subsequent
> inserts are considerably fast, subsequent inserts to more than 32000 tables
> are very slow.

This has probably got more to do with inefficiencies of your filesystem
than anything else --- did you pick one that scales well to lots of
files per directory?

> This seems to be due to the datatype (integer) of max_files_per_process
> option in the postgres.conf file which is used to set the maximum number of
> open file descriptors.

It's not so much the datatype of max_files_per_process as the datatype
of kernel file descriptors that's the limitation ...
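The kernel-side ceilings in question can be inspected directly; a Linux-specific sketch (the /proc path does not exist on other platforms):

```shell
# Per-process open-file limit the postmaster inherits from its shell;
# PostgreSQL honors max_files_per_process only up to this ceiling.
ulimit -n

# System-wide cap on open file handles (Linux).
cat /proc/sys/fs/file-max
```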

			regards, tom lane

-- 
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
