Tips for optimizing PHP Web proxy sites?

I run several Web proxy sites (with random names like goofycake.com,
cheesecamera.com, etc.) running PHProxy -- sites that you use to fetch
the content of other sites indirectly. The response times for the
sites are decent during the day when they get heavy usage, but I'm
wondering if I'm missing some obvious way to improve their
performance. Some of them get more hits *and* have faster response
times during the day than others, even when the machines in question
have essentially the same hardware.

I've already read http://httpd.apache.org/docs/1.3/misc/perf-tuning.html

I know, I know, asking how to improve Web server perf (without
spending more money) is the oldest question in the book, but there are
certain shortcuts and hacks that may not be acceptable for regular Web
sites but would be acceptable for these, and special circumstances
that may make some tricks work better than they normally would. Half
the time our users try to get to our sites, they're blocked by
Internet blockers anyway, and other times, when users can get to our
proxy sites, they find that the site they're trying to browse doesn't
work through our proxies. The point is that using these sites
is a "best effort" kind of thing and it's acceptable to drop
connections or do other funny things if it helps serve more users all
around. For example, a hack we have running right now on each server
is a script that checks every minute to see if the local Web server is
responding quickly, and if it isn't, just restarts the httpd service
(and who cares about anybody who's connected at that moment). So if
75% of our CPU is being used by 5% of requests, or something like
that, then it would be acceptable to just cap CPU usage per request,
if that's possible.
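
In case it's useful context, here's roughly what that watchdog looks
like. This is a simplified sketch, not our exact script: the probe
target, the 5-second threshold, and the restart command are stand-ins
for whatever a given box actually uses.

#!/usr/bin/php
<?php
// Simplified sketch of the once-a-minute watchdog (run from cron).
// Assumes the proxy answers on localhost:80 and that
// "/sbin/service httpd restart" is the right restart command here.

function responds_quickly($timeout = 5) {
    // Treat "can't connect or answer a HEAD within $timeout seconds"
    // as "not responding quickly".
    $fp = @fsockopen('127.0.0.1', 80, $errno, $errstr, $timeout);
    if (!$fp) return false;
    stream_set_timeout($fp, $timeout);
    fwrite($fp, "HEAD / HTTP/1.0\r\nHost: localhost\r\n\r\n");
    $status = fgets($fp, 128);   // e.g. "HTTP/1.1 200 OK"
    fclose($fp);
    return $status !== false;
}

function swap_used_kb() {
    // Parse /proc/meminfo rather than shelling out to free(1).
    $total = $free = 0;
    foreach (file('/proc/meminfo') as $line) {
        if (preg_match('/^SwapTotal:\s+(\d+)/', $line, $m)) $total = (int)$m[1];
        if (preg_match('/^SwapFree:\s+(\d+)/',  $line, $m)) $free  = (int)$m[1];
    }
    return $total - $free;
}

// Best-effort service: if the probe was slow or failed, or swap use
// has passed the 150 MB threshold mentioned below, restart httpd and
// drop whoever happens to be connected.
if (!responds_quickly() || swap_used_kb() > 150 * 1024) {
    exec('/sbin/service httpd restart');
}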

Currently, whenever the machines are slow, the bottleneck appears to
be CPU (the servers usually have not reached MaxClients, swap usage
is usually low, and in any case our once-a-minute script will restart
the httpd service any time swap use exceeds 150 MB). Can I cap the
amount of CPU that each request uses? Or would that not solve the
problem, because even though each request would use less CPU per
second, it would take longer to run, so the total burden on the CPU
would be the same in the long run? Can I sprinkle some dust over the
PHProxy script to make it run faster? What would you do if you were
running these servers? Sorry that's so open-ended, but so is the
problem I'm trying to solve.
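
To make the CPU-cap idea concrete: as I understand it, PHP's
execution-time limit on Unix doesn't count time spent in system calls
or stream operations, so it acts closer to a per-request CPU cap than
a wall-clock limit. Something like this near the top of the main
PHProxy script is the kind of thing I have in mind (the 15-second
figure is invented for illustration):

<?php
// Illustrative only: abort any request that burns roughly 15 seconds
// of script execution time. On Unix this timer ignores system calls
// and stream reads, so it behaves more like a CPU cap per request.
set_time_limit(15);
// ... rest of PHProxy ...

Whether that would actually reduce the total CPU burden, or just kill
the long-running requests that were going to fail anyway, is exactly
what I'm unsure about.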

Thanks, hope someone might have some ideas!

	-Bennett

bennett@xxxxxxxxxxxxx     http://www.peacefire.org
(425) 497 9002


---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@xxxxxxxxxxxxxxxx
  "   from the digest: users-digest-unsubscribe@xxxxxxxxxxxxxxxx
For additional commands, e-mail: users-help@xxxxxxxxxxxxxxxx

