Hello. A bit of a long shot...

On my website, there is a directory containing a relatively large number of big files (PDFs). Every now and then, a user sees them, gets very excited, and downloads them all within a short period of time (probably using FF's DownThemAll plugin or something similar). Fair enough, that's what they're for, but, especially if the user is on a slow connection, this ties up all available child processes, making the site unreachable for others, which leads to swapping and, eventually, crashing.

I'm looking for a quick, on-the-fly way to prevent this from happening (in the long run, the whole server code will be rewritten, so then I should be able to use some module - or write one myself). I googled a bit about limiting the number of child processes per IP address, but that seems to be a tricky business.

Then I was thinking: is there perhaps a nice way of setting MaxClients 'locally' to a small number, so that no more than, say, 10 or 20 child processes will be dealing with requests from a certain directory, while the other processes happily deal with the rest? E.g. (non-working example!) something like:

    MaxClients 100
    <Directory /pdf>
        LocalMaxClients 20
    </Directory>

I know this won't be the nicest solution - it would still prevent other, non-greedy users from downloading the PDFs while the greedy person is leeching the site - but something like this would make my life a lot easier for the time being.

Oh, and perhaps I should add that I don't really care about bandwidth.

Any ideas?

Martijn.
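
P.S. In case it helps anyone suggest something better: the closest thing I turned up while googling is the third-party mod_limitipconn module, which limits simultaneous connections per client IP rather than capping a directory as a whole. I haven't tried it, so please treat the snippet below as an untested sketch (the LoadModule line, the module path and the MaxConnPerIP directive are taken from its documentation as I understand it, and some versions apparently also want mod_status loaded with ExtendedStatus On):

    # untested sketch - assumes the third-party mod_limitipconn module is built and installed
    LoadModule limitipconn_module modules/mod_limitipconn.so

    # /pdf is the URL path of the directory holding the big files
    <Location /pdf>
        # each client IP may hold at most 3 simultaneous connections here,
        # leaving the remaining child processes free for other visitors
        MaxConnPerIP 3
    </Location>

If I read it correctly, this wouldn't cap the total number of children serving /pdf (which is what I asked for above), but it would stop a single greedy downloader from tying them all up, which is really the problem I'm trying to solve.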