(2013/08/06 19:33), Andres Freund wrote:
On 2013-08-06 19:19:41 +0900, KONDO Mitsumasa wrote:
(2013/08/05 21:23), Tom Lane wrote:
Andres Freund <andres@xxxxxxxxxxxxxxx> writes:
... Also, there are global limits to the number of filehandles that can
be opened simultaneously on a system.
Yeah. Raising max_files_per_process puts you at serious risk that
everything else on the box will start falling over for lack of available
FD slots.
Is that really so? When I use Hadoop-like NoSQL storage, I set a large number of FDs.
In fact, the Hadoop wiki says the following:
http://wiki.apache.org/hadoop/TooManyOpenFiles
Too Many Open Files
You can see this on Linux machines in client-side applications, server code or even in test runs.
It is caused by per-process limits on the number of files that a single user/process can have open, which was introduced in the 2.6.27 kernel. The default value, 128, was chosen because "that should be enough".
The first paragraph (which you're quoting with 128) is talking about
epoll, which we don't use. The second paragraph indeed talks about the
max number of fds. Of *one* process.
Yes, that is how I understood it.
Postgres uses a *process* based model. So, max_files_per_process is about
the number of fds in a single backend. You need to multiply it by
max_connections + a bunch to get to the overall number of FDs.
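To make sure I follow that multiplication, here is a minimal sketch (Python, only because it is compact; the numbers are assumptions for illustration: the stock defaults max_files_per_process = 1000 and max_connections = 100, plus a guessed count of auxiliary processes):

    # Rough worst-case FD demand for a process-per-connection server.
    # All values below are assumptions for illustration only.
    max_files_per_process = 1000    # PostgreSQL default
    max_connections = 100           # PostgreSQL default
    auxiliary_processes = 20        # "a bunch": checkpointer, bgwriter, etc. (guess)

    worst_case_fds = max_files_per_process * (max_connections + auxiliary_processes)
    print("worst-case open FDs for the whole cluster: %d" % worst_case_fds)
    # -> 120000 descriptors the kernel might have to supply at once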
Yes, I see that the total can become large. Even so, I think the default
max_files_per_process seems too small. For NoSQL systems a large number of FDs is
usually recommended, and I do not know whether the PostgreSQL default is really
enough. If we use PostgreSQL with big data, we might need to raise
max_files_per_process, I think.
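Before raising it, though, I would at least compare that estimate with the limits the OS itself reports. A Linux-only sketch, again purely illustrative (the 120000 figure is the worst-case estimate from the sketch above):

    import resource

    # Per-process limit for the user running postgres (same as 'ulimit -n').
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("per-process FD limit: soft=%d hard=%d" % (soft, hard))

    # System-wide ceiling and current allocation; on Linux
    # /proc/sys/fs/file-nr holds "allocated  free  maximum".
    with open("/proc/sys/fs/file-nr") as f:
        allocated, _free, system_max = (int(x) for x in f.read().split())
    print("system-wide: %d handles allocated, maximum %d" % (allocated, system_max))

    worst_case_fds = 120000   # illustrative estimate from the sketch above
    if worst_case_fds > system_max - allocated:
        print("raising max_files_per_process could exhaust the box")

In practice the backends rarely hold their full quota of files open at once, so this is only a pessimistic upper bound.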
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center