Eric

Good point. Further investigation (snooping with tcpdump) shows that the browser (Firefox 3.5.9) sometimes retries the request. The number of retries appears random. I don't think we have client-side (JavaScript) code retrying requests, but I'm not sure.
BR, A

PS. Note that for wget:

-t number, --tries=number
    Set number of retries to number.
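A quick way to test this is to replay the hung request with wget, which takes browser-side retries out of the picture. This is only a sketch: http://localhost/slow-url stands in for whatever request drives httpd to the 1000 MB limit.

```shell
# --tries=1 turns off wget's own retry loop (the default is up to 20
# tries for retryable errors), so any repeat requests then visible in
# tcpdump or the access log cannot be coming from the client.
# --timeout bounds how long wget waits before giving up entirely.
wget --tries=1 --timeout=120 -O /dev/null http://localhost/slow-url
```

If the replacement httpd still appears to re-serve the request with this client, the retries are not coming from the browser.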
On Mar 31, 2010, at 8:14 PM, Eric Covener wrote:
On Wed, Mar 31, 2010 at 7:55 PM, ARTHUR GOLDBERG <artg@xxxxxxxxxx> wrote:

> httpd processes die as expected when their VM size reaches 1000 MB.
> But here's the problem. After the httpd serving the request dies, a new
> one is created to handle the same request. And so on.
> I think this is all done in Apache, as the access log doesn't show
> another request from the client. Is this correct?

That doesn't sound right at all. You'll get a replacement if your prefork config and the load dictate one, but it doesn't pick up where the other left off. The request is lost. (Since none of them ever end, you might not see an access log entry at all. Try it with wget.)

-- 
Eric Covener
covener@xxxxxxxxx

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@xxxxxxxxxxxxxxxx
   "   from the digest: users-digest-unsubscribe@xxxxxxxxxxxxxxxx
For additional commands, e-mail: users-help@xxxxxxxxxxxxxxxx