On 3/22/2010 5:08 AM, Geoff Millikan wrote:
>> if your server averages 300 simultaneous connections, you need
>> to start with 300 servers, and you never want it to drop
>> below that number.
>
> Your experience might show otherwise; however, based on our experience,
> if we averaged 300 new customers a minute (not 300 requests/sec), a
> MinSpareServers of 300 wouldn't be enough.

A connection is not a customer, as we discuss below.  I'm using the
technical, networking term "connection", and this is what you can observe
in mod_status over a period of time to calculate an average (and min/max).

>> Yes, a browser can make multiple connections, but this is
>> typically only two parallel pipelines, perhaps even four.
>
> The "average" browser now makes 6 parallel connections per hostname per:
> http://www.browserscope.org/

Interesting research, thanks!

>> But 30 workers are not handling the 30 requests comprising
>> one user connecting to your site!!!  You just happened
>> to hit a magic correlation in your testing :)
>
> I agree, the way I understand the prefork model to work, the 30 processes
> aren't each serving one of the 30+ requests this customer's browser made
> (is keepalive tracked across processes?).  However, the way the testing
> worked out for us, it seems that way.  We did a lot of testing to come up
> with our numbers, and just "ball parking" it, the number of servers seemed
> to work out best when matched to the number of requests per page.

Fair enough, but if your testing was of ~30 requests, and we believe that
the typical browser makes 6 simultaneous connections, then it sounds like
the real magic was 6 * a fudge factor of 5 ;-P
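
If it helps, here is a rough sketch of how you could sample the
"average connections" number I mean from mod_status's machine-readable
output.  It assumes mod_status is enabled, that the ?auto output exposes
a BusyWorkers line, and that the URL, interval and sample count below
are placeholders you'd adapt to your own setup:

#!/usr/bin/env python3
# Rough sketch: poll mod_status's ?auto output and track how many
# workers are busy over time.  STATUS_URL, INTERVAL and SAMPLES are
# placeholders; adjust them for your server and measurement window.
import time
import urllib.request

STATUS_URL = "http://localhost/server-status?auto"  # placeholder URL
INTERVAL = 10    # seconds between samples
SAMPLES = 360    # one hour's worth at 10-second intervals

busy = []
for _ in range(SAMPLES):
    with urllib.request.urlopen(STATUS_URL) as resp:
        for line in resp.read().decode("ascii", "replace").splitlines():
            if line.startswith("BusyWorkers:"):
                busy.append(int(line.split(":")[1]))
                break
    time.sleep(INTERVAL)

if busy:
    print("busy workers: min=%d max=%d avg=%.1f"
          % (min(busy), max(busy), sum(busy) / len(busy)))

Run that across your busy period and the min/avg/max it prints is the
"connections" figure I'm talking about -- not customers, not page views.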
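
And to show where such a number would plug in, a prefork config along
those lines might look roughly like the sketch below.  The values are
placeholders you'd derive from your own mod_status measurements, not
recommendations:

<IfModule mpm_prefork_module>
    StartServers          300   # start near the measured average
    MinSpareServers       300   # never let the pool drop below it
    MaxSpareServers       400   # some headroom before children are reaped
    ServerLimit           600   # must be raised to allow MaxClients > 256
    MaxClients            600   # measured peak plus a safety margin
    MaxRequestsPerChild 10000
</IfModule>

Note that ServerLimit has to go up along with MaxClients once you want
more than the default 256 children.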