Re: MaxRequestsPerChild - New child process doesn't process requests


 



On Fri, Aug 22, 2014 at 3:46 AM, Vattikuti, Vamsi Krishna Venkata (STSD) <vamsik@xxxxxx> wrote:

Hi,

 

Thanks for the feedback.

 

That Tomcat application (monitoring) is invoked through a 3rd-party module, and the application remains active in the browser for days at a time, which results in a memory leak in the httpd process. To prevent that leak, the application owner configured MaxRequestsPerChild as 100. I will suggest that they increase the values of this and the other directives.


It is likely that you won't find a satisfactory solution as long as you have memory leaks in httpd and extremely long-running requests.  The leak needs to be fixed.

I think you will be better off with the prefork MPM until this can be fixed:

* Since you have such a low MaxClients, you're not getting much benefit from the worker MPM.
* When a prefork child process reaches MaxRequestsPerChild, it can exit immediately, since there are no other active requests in the child process, rather than being stranded forever, consuming resources while unable to accept new connections.
* If necessary, a person or a monitoring script can kill individual prefork child processes based on time of active request or memory use, and only affect a single active request.
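As a sketch of the prefork suggestion above (directive values are illustrative, not a tuned recommendation), a prefork configuration with roughly the same 25-request capacity as the worker settings later in this thread might look like:

```apache
# Hypothetical prefork configuration with about the same capacity
# (25 concurrent requests) as the worker settings quoted below.
# Tune the numbers for your actual load and memory budget.
<IfModule prefork.c>
StartServers          5
MinSpareServers       5
MaxSpareServers      10
MaxClients           25
MaxRequestsPerChild 100
</IfModule>
```

With prefork, each child handles one request at a time, so a child that hits MaxRequestsPerChild can exit without stranding other in-flight requests.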

If this third-party module also requires exactly one child process, in addition to leaking memory and requiring child process cleanup, you've hit double jeopardy.  Switch to another line of work.

 

Regards,

Vamsi.

 

From: Daniel [mailto:dferradal@xxxxxxxxx]
Sent: Friday, August 22, 2014 12:50 PM


To: users@xxxxxxxxxxxxxxxx
Subject: Re: MaxRequestsPerChild - New child process doesn't process requests

 

What memory leak exactly? You are just proxying to Tomcat.

 

Increase MaxRequestsPerChild to a more sensible number, such as 10000 or even higher, so httpd is not constantly recycling children if you have even minimal load.

 

Add more server processes to avoid your issue, and try to use more threads too, at least 50.

 

It seems like you are constraining things too much; Apache 2.2.15 with mpm_worker can handle much, much more.
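Putting that advice together, a less-constrained worker configuration might look like the following sketch (all values are illustrative; the right numbers depend on available memory and traffic):

```apache
# Sketch of less-constrained worker settings, per the advice above.
# MaxClients = ServerLimit x ThreadsPerChild (4 x 50 = 200 here).
<IfModule worker.c>
StartServers            2
ServerLimit             4
ThreadsPerChild        50
MinSpareThreads        50
MaxSpareThreads       100
MaxClients            200
MaxRequestsPerChild 10000
</IfModule>
```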

 

You will notice performance increasing greatly overall.

 

 

 

2014-08-22 6:06 GMT+02:00 Vattikuti, Vamsi Krishna Venkata (STSD) <vamsik@xxxxxx>:

Hi Jeff,

 

Thanks for the quick response. We will check and do the needful.

 

That 100 is to prevent memory leak.

 

Regards,

Krishna.

 

From: Jeff Trawick [mailto:trawick@xxxxxxxxx]
Sent: Friday, August 22, 2014 2:24 AM
To: users@xxxxxxxxxxxxxxxx
Subject: Re: MaxRequestsPerChild - New child process doesn't process requests

 

On Thu, Aug 21, 2014 at 4:39 PM, Vattikuti, Vamsi Krishna Venkata (STSD) <vamsik@xxxxxx> wrote:

Hi,

 

We are having an issue with a Tomcat application accessed through a proxy; details are below. Can you please check and share your feedback?

 

Issue:

We have an application (Tomcat) accessed through a proxy as shown below. Also, we have MaxRequestsPerChild set to 100.

 

Whenever MaxRequestsPerChild reaches its limit, a new process is started but the application becomes unresponsive. It seems that the new process doesn't service any requests.

We have to restart httpd to recover.

 

 

Log:

-          Access.log doesn't show any requests for the new child

-          Error_log shows that:

a)      workers are initiated for the new process, but it didn't service any requests

b)      processing was stuck for about a minute for some reason:

   [Fri Aug 08 16:09:17 2014] [debug] ssl_engine_kernel.c(2118): [client 127.0.0.1] Certificate Verification, depth 0 [subject: /C=y/ST=y/L=y/O=y/OU=y/CN=y, issuer: /C=y/ST=y/L=y/O=y/OU=y/CN=y, serial: xyz]

   [Fri Aug 08 16:10:12 2014] [info] [client 10.150.90.25] Connection to child 6 established (server *:<port number from client>)

c)      the SSL handshake started but didn't complete for 4 connections related to the new process; there are no SSL-related errors

$ grep -i handshake errorlog.2014-08-08-07_06_44 | grep -c start

707

$ grep -i handshake errorlog.2014-08-08-07_06_44 | grep -c done

703

$
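The same started-vs-completed check can be scripted. This sketch uses a fabricated sample log, since the real log name and exact message text vary:

```shell
# Build a tiny sample error log (illustrative messages only;
# real mod_ssl log lines differ in format).
cat > /tmp/errorlog.sample <<'EOF'
[info] SSL handshake start for client A
[info] SSL handshake done for client A
[info] SSL handshake start for client B
EOF

# Count handshakes that started vs. completed, as in the greps above.
started=$(grep -ic 'handshake.*start' /tmp/errorlog.sample)
completed=$(grep -ic 'handshake.*done' /tmp/errorlog.sample)
echo "handshakes stuck: $((started - completed))"
```

A nonzero difference means some connections began an SSL handshake that never finished, which matches the 707-vs-703 counts above.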

 

Apache version:

2.2.15

 

 

Proxy setting:

SSLProxyEngine On

SSLProxyCipherSuite ALL

SSLProxyMachineCertificateFile /var/ssl/proxy.pem

 

ProxyPass /app1 https://localhost:<port number>/app1 (Tomcat)

 

Worker configuration:

KeepAlive On

MaxKeepAliveRequests 100

KeepAliveTimeout 15

<IfModule worker.c>

StartServers         1

MaxClients           25

MinSpareThreads      12

MaxSpareThreads      25

ThreadsPerChild      25

ServerLimit          1

MaxRequestsPerChild  100

MaxMemFree  50

</IfModule>

 

Thanks & Regards,

Krishna


MaxRequestsPerChild 100 is ridiculously low.  What is happening in httpd to cause you to need that setting?

 

Anyway...

 

Once an httpd child process has reached 100 connections, it initiates a graceful shutdown, which means that instead of aborting current requests it will instead wait for current requests to finish, then exit.

 

During the time that it is waiting for current requests to finish, new connections must be handled by other child processes.  BUT you set ServerLimit to 1 (and other directives such as ThreadsPerChild and MaxClients are consistent with allowing only one child process), so no other child process can be created during that time.

 

Thus, once 100 connections are handled, new clients will be blocked until existing requests finish.
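To illustrate the difference (values are only a sketch, not tuned advice): allowing at least two child processes means a fresh child can accept new connections while the one that hit MaxRequestsPerChild drains gracefully.

```apache
# With two children allowed, a replacement child can serve new
# connections while a draining child finishes its last requests.
# MaxClients = ServerLimit x ThreadsPerChild (2 x 25 = 50 here).
<IfModule worker.c>
ServerLimit           2
ThreadsPerChild      25
MaxClients           50
MaxRequestsPerChild 100
</IfModule>
```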

 

--/--

 

My guess:  Your Java application takes a long time (maybe forever?) to handle some requests.  MaxRequestsPerChild makes it worse.  If the Java requests are slow and eventually finish, the solution is to keep a steady set of httpd child processes (having them gracefully exit when there are slow backend requests can be harmful) and increase the number of httpd threads/child processes to handle the load.

 

If some Java requests hang, see how to handle that on the Tomcat side.


Enable server status with ExtendedStatus On and watch what happens -- whether certain requests handled by the Java application take a relatively long time, tying up some or all of your very limited number of httpd threads.
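A minimal server-status setup might look like this sketch (it assumes mod_status is loaded; the Location path and access control are illustrative, using the 2.2-style Order/Deny/Allow directives):

```apache
# Expose the scoreboard so you can watch per-thread request state.
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
```

With this in place, /server-status shows each worker thread and how long its current request has been running, which makes slow backend requests easy to spot.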

 

--

Born in Roswell... married an alien...
http://emptyhammock.com/

 

 





