Re: setting MaxClients locally?

Allen Pulsifer wrote:
Hi Martijn,

You could run two completely separate instances of httpd, one listening on
port 80 with MaxClients 100 serving your normal content, and the other
listening on port 8000 with MaxClients 20 serving your large PDFs.  This
would require two completely separate config files (for example,
httpd.conf and httpd-pdf.conf), and launching the second instance using the
httpd -f option.  You would also have to change all links to the PDFs from
www.yoursite.com/file.pdf to www.yoursite.com:8000/file.pdf.  Alternatively,
you could assign the second server instance to a different IP address
instead of a different port, configure DNS to make this IP address answer to
a subdomain like pdfs.yoursite.com, and then change the PDF links from
www.yoursite.com/file.pdf to pdfs.yoursite.com/file.pdf.
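
To make that concrete, the easiest starting point for the second config is
probably a copy of your existing httpd.conf with a handful of directives
changed.  Something along these lines (untested sketch; the paths and
numbers are just placeholders):

# httpd-pdf.conf - copy of httpd.conf with these directives changed
Listen 8000
DocumentRoot /var/www/html/pdf

# must differ from the main instance so the two don't clash
PidFile /var/run/httpd-pdf.pid
ErrorLog logs/pdf_error_log

# small worker pool so PDF downloads can never eat the whole machine
<IfModule prefork.c>
    MaxClients 20
</IfModule>

It would then be started alongside the main server with something like

httpd -f /usr/local/apache2/conf/httpd-pdf.conf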

An alternative to changing all your links would be to use a reverse proxy, e.g.:

<Location /pdf>
   # requires mod_proxy and mod_proxy_http; inside a <Location> block,
   # ProxyPass takes only the back-end URL, not a local path
   ProxyPass http://localhost:8080/
   ProxyPassReverse http://localhost:8080/
</Location>

This way the change is transparent to the end user, and they stay on your server under your control.  However, doing it this way you will only be limiting the connections from the front-end server to the back-end server.
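
For that to work, mod_proxy and mod_proxy_http have to be loaded in the front-end config, and the actual cap on simultaneous PDF downloads comes from the MaxClients of the small back-end instance listening on 8080, not from the proxy directives themselves.  Roughly (the module paths are the stock ones and may differ on your build):

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

and, in the back-end instance's own config:

# keep this instance small; it only serves the PDFs
MaxClients 20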

Such a limit would be global to all users attempting to 'suck down' your files, but it will stop the server from being flattened in the process.

There is no easy way of doing what you want directly, without farming the work off to another httpd process of some kind.



Another option might be to move the PDF files to a hosting service such as
Amazon S3, http://www.amazon.com/S3-AWS-home-page-Money/.  Files uploaded to
Amazon S3 can be made publicly available at a URL such as
http://s3.amazonaws.com/your-bucket-name/file.pdf or
http://your-bucket-name.s3.amazonaws.com/file.pdf, or using DNS tricks, at a
virtual host such as pdfs.yoursite.com/file.pdf or
www.yourpdfsite.com/file.pdf.  See
http://docs.amazonwebservices.com/AmazonS3/2006-03-01/VirtualHosting.html.
The cost of S3 is $0.18 per GB of data transfer, plus storage and request
charges.
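
If you go the DNS route, the bucket has to be named after the hostname you
want to serve from, and a CNAME then points that hostname at S3.  Roughly
(illustrative zone-file line; the bucket name is a placeholder you would
create in S3 first):

; bucket created in S3 as "pdfs.yoursite.com"
pdfs.yoursite.com.   IN   CNAME   pdfs.yoursite.com.s3.amazonaws.com.

After that, the links can stay in the form http://pdfs.yoursite.com/file.pdf.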

Allen

-----Original Message-----
From: Martijn [mailto:sweetwatergeek@xxxxxxxxxxxxxx]
Sent: Friday, June 08, 2007 5:27 AM
To: users@xxxxxxxxxxxxxxxx
Subject: setting MaxClients locally?


Hello.

A bit of a long shot... On my website, there is a directory containing a relatively large number of big files (PDFs). Every now and then, a user sees them, gets very excited and downloads them all within a short period of time (probably using FF's DownThemAll plugin or something similar). Fair enough, that's what they're for, but, especially if the user is on a slow connection, this ties up all available child processes, making the site unreachable for others, which leads to swapping and, eventually, crashing.

I'm looking for a quick, on the fly way to prevent this from happening (in the long run, the whole server code will be re-written, so I should be able to use some module - or write one myself). I googled a bit about limiting the number of child processes per IP address, but that seems to be a tricky business. Then I was thinking, is there perhaps a nice way of setting MaxClients 'locally' to a small number, so that no more than, say, 10 or 20 child processes will be dealing with requests from a certain directory, while the other processes will happily be dealing with the rest? E.g. (non-working example!) something like

MaxClients 100

<Directory /pdf>
LocalMaxClients 20
</Directory>

I know this won't be the nicest solution - it would still prevent other, non-greedy users from downloading the PDFs while the greedy person is leeching the site - but something like this would make my life a lot easier for the time being. Oh, and perhaps I should add that I don't really care about bandwidth.

Any ideas?

Martijn.



---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@xxxxxxxxxxxxxxxx
  "   from the digest: users-digest-unsubscribe@xxxxxxxxxxxxxxxx
For additional commands, e-mail: users-help@xxxxxxxxxxxxxxxx

