Re: Problems with Event MPM Performance Tuning in 2.4.18


 



[Re-arranging posts a bit to follow the email thread flow]

2016-05-31 18:49 GMT+02:00 Houser, Rick <rick.houser@xxxxxxxxxxx>:


 

From: Luca Toscano [mailto:toscano.luca@xxxxxxxxx]
Sent: Tuesday, May 31, 2016 11:02
To: users@xxxxxxxxxxxxxxxx
Subject: Re: Problems with Event MPM Performance Tuning in 2.4.18

 

Hi Rick!

 

2016-05-31 15:57 GMT+02:00 Houser, Rick <rick.houser@xxxxxxxxxxx>:

I have to deal with mod_cluster, and it is extremely memory-hungry (in the GB range per process).  As mitigation, I'm trying to get down to a single Apache worker process per host when we aren't under heavy load.  That would save me about 6 GB per host.

 

We have several hosts running the exact same thing behind a load balancer and I've never seen a crash, so I'm not concerned with running a single instance.  Running four 20-thread instances takes almost four times the memory of this one instance, for example.

 

 

This is the relevant portion of the configuration:

 

LoadModule mpm_event_module modules/mod_mpm_event.so
ServerLimit       8
StartServers      1
ThreadLimit       80
ThreadsPerChild   80
MaxRequestWorkers 640
MaxSpareThreads   120
MinSpareThreads   8
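As a quick sanity check, the limits above are internally consistent; a minimal sketch (the relationships between the directives come from the mpm_common documentation, the numbers are copied from the config and the mod_status snapshot below):

```python
# Values copied from the configuration above.
server_limit = 8
thread_limit = 80
threads_per_child = 80
max_request_workers = 640
max_spare_threads = 120
min_spare_threads = 8

# ThreadsPerChild may not exceed ThreadLimit.
assert threads_per_child <= thread_limit

# MaxRequestWorkers / ThreadsPerChild gives the maximum number of child
# processes, which must fit within ServerLimit (640 / 80 = 8).
assert max_request_workers // threads_per_child <= server_limit

# With two children running, there are 160 threads total; the ~155 idle
# threads reported below exceed MaxSpareThreads (120), so one child
# would be expected to be reaped once its connections drain.
assert 155 > max_spare_threads
```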

 

 

 

The top of the mod_status output:

 

Apache Server Status for HOSTNAME (via 10.X.X.X)

Server Version: CUSTOMSTRING/2.4.18 (Unix) OpenSSL/1.0.1e-fips mod_cluster/1.3.1.Final
Server MPM: event
Server Built: Dec 16 2015 16:07:29
________________________________________
Current Time: Tuesday, 17-May-2016 14:37:00 EDT
Restart Time: Monday, 02-May-2016 09:36:16 EDT
Parent Server Config. Generation: 10
Parent Server MPM Generation: 9
Server uptime: 15 days 5 hours 44 seconds
Server load: 0.72 0.75 0.89
Total accesses: 39007867 - Total Traffic: 1.7 GB
CPU Usage: u2533.2 s168.49 cu0 cs0 - .206% CPU load
29.7 requests/sec - 1364 B/second - 45 B/request
5 requests currently being processed, 155 idle workers

PID    Connections       Threads       Async connections
       total  accepting  busy   idle   writing  keep-alive  closing
11397  35     yes        2      78     0        33          0
29323  26     yes        3      77     0        23          0
Sum    61                5      155    0        56          0

................................................................
................________________________________________________
_______W____W___________________................................
................................................____________W___
______W__________________________________W______________________
................................................................
................................................................
................................................................
................................................................
................................................................

 

 

The idle thread count here usually stays around the mid-150s.  These particular workers were started about 40 minutes apart, but I see the same pattern in other regions with similar start times, and the same workers have been up for over a month.

 

Given MaxSpareThreads 120, I would expect this to drop the second worker fairly quickly and work as described (https://httpd.apache.org/docs/2.4/mod/mpm_common.html#maxsparethreads).  But that's not happening, and I'm stuck with two processes handling the load.  It's acting almost as if there were a hard-coded "ServerMin 2" directive or something.
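The expectation stated above can be modeled in a few lines; this is a rough sketch of what the Min/MaxSpareThreads documentation promises, not the actual httpd maintenance code:

```python
# Rough model of the documented spare-thread maintenance policy:
# too many idle threads -> gracefully terminate one child;
# too few -> spawn another child.
def expected_action(idle_threads, max_spare=120, min_spare=8):
    if idle_threads > max_spare:
        return "terminate one child"
    if idle_threads < min_spare:
        return "spawn one child"
    return "steady state"

# The mod_status snapshot shows 155 idle threads against MaxSpareThreads 120:
print(expected_action(155))  # -> terminate one child
```

Under this model a second child should not survive long at this load, which is what makes the observed behavior look like a bug.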

 

This certainly looks like a bug (whether in the documentation or the code itself).  Any suggestions on how to get this to work before I submit a bug ticket?

 

IIRC we had a similar issue earlier on in this email list: 

 

 

AFAIK the fix has not been ported to the 2.4.x branch yet. If this is what you are experiencing, we'll follow up with the devs to check the status of the backport proposal.

 

Let me know!

 

Luca


 

 Thank you very much for the quick response, Luca.

 

It definitely sounds like that could be related to the problems I'm having.  Looking at the patch, however, both the original and the replacement seem to enforce a minimum value for MinSpareThreads that corresponds to at least one completely idle process.  From my perspective, that contradicts the documentation for Max/MinSpareThreads and prevents me from spawning additional processes only when the existing ones start to fill up.

 

I’m going to dive into that specific section of code further and see if I can’t dig something up.

 


So AFAIK the current 2.4 behavior is to enforce the minimum number of spare threads as

ThreadsPerChild * num_buckets

with num_buckets equal to 1 if you are not leveraging SO_REUSEPORT (https://httpd.apache.org/docs/current/mod/mpm_common.html#listencoresbucketsratio). This means that even with only one busy thread, the minimum number of httpd processes running will always be two. The new threshold is more conservative:

ThreadsPerChild * (num_buckets - 1) + num_buckets

In your case, with num_buckets = 1, the lower bound on min spare threads is one, making it possible to get down to a single httpd process (because the MinSpareThreads lower bound won't override your Min/MaxSpareThreads settings anymore).
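Plugging in the values from Rick's config (ThreadsPerChild = 80, num_buckets = 1) makes the difference between the two formulas concrete:

```python
threads_per_child = 80
num_buckets = 1  # no SO_REUSEPORT / ListenCoresBucketsRatio in play

# Old lower bound enforced on MinSpareThreads:
old_floor = threads_per_child * num_buckets                      # 80
# New, more conservative lower bound from the patch:
new_floor = threads_per_child * (num_buckets - 1) + num_buckets  # 1

# With the old floor (80), a single child serving even one request has
# only 79 spare threads, so a second child is always kept around.
busy = 1
spare_with_one_child = threads_per_child - busy  # 79
assert spare_with_one_child < old_floor    # old floor forces a second process
assert spare_with_one_child >= new_floor   # new floor allows a single process
```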

More info in Yann's explanation: http://svn.apache.org/viewvc?view=revision&revision=1737447

Let me know if it makes sense! If so, to fix your problem you'd need to apply the patch to the httpd source and recompile, or wait for the backport to be reviewed and merged into the 2.4.x branch (and released afterwards).

Luca

