Re: Restricting Postgres

Matt - Very interesting information about squid effectiveness, thanks.

Martin,
You mean your site has no images? No CSS files? No JavaScript files? Nearly
everything is dynamic?

I've found that our CMS spends more time sending a 23KB image to a dial-up
user than it does generating and serving dynamic content.

This means that if you have a "light" squid process that caches and serves
your images and static content from its cache, then your apache processes
can truly focus on only the dynamic data.
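
For concreteness, here is a rough sketch of what that "light" squid front
end can look like, assuming a Squid 2.5-era accelerator setup with apache
moved to a local port; the port numbers and refresh times are illustrative,
not taken from anyone's actual config:

    # squid listens where the world expects the webserver to be
    http_port 80
    # anything it can't satisfy from cache goes to the local apache
    httpd_accel_host 127.0.0.1
    httpd_accel_port 8080
    httpd_accel_single_host on
    httpd_accel_with_proxy off
    httpd_accel_uses_host_header on
    # keep images/CSS/JS around even if apache sends no Expires header
    # (minimum 1 day, maximum 1 week, given in minutes)
    refresh_pattern -i \.(gif|jpg|jpeg|png|css|js)$ 1440 50% 10080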

Case in point: A first-time visitor hits your home page.  A dynamic page is
generated (in about 1 second) and served (taking 2 more seconds) which
contains links to 20 additional files (images, stylesheets, etc.). Then
expensive apache processes are used to serve each of those 20 files, which
takes an additional 14 seconds.  Your precious application server processes
have now spent 14 seconds serving stuff that could have been served by an
upstream cache.

I am all for using upstream caches and SSL accelerators to take the load off
of application servers.  My apache children often take 16 or 20MB of RAM
each.  Why spend all of that on a 1.3KB image?
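
Rough numbers: at 16 or 20MB per child, an apache process feeding a 1.3KB
image to a dial-up user is holding well over ten thousand times the file's
size in RAM while it does so - and in the example above that goes on for 14
seconds across 20 requests.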

Just food for thought.  There are people who use proxying in apache to
redirect expensive tasks to other servers that are dedicated to just one
heavy job.  On a dedicated backend server like that you likely do have 99%
dynamic content.
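
If anyone wants to try that, a minimal sketch (assuming Apache 2.0-style
module names and a made-up backend hostname) looks something like this in
httpd.conf:

    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so

    # hand the expensive application URLs to a dedicated backend box;
    # everything else (images, CSS, JS) is served straight from disk here
    ProxyPass        /app/ http://app-backend.example.com/app/
    ProxyPassReverse /app/ http://app-backend.example.com/app/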

Matthew Nuzum		| Makers of "Elite Content Management System"
www.followers.net		| View samples of Elite CMS in action
matt@xxxxxxxxxxxxx	| http://www.followers.net/portfolio/

-----Original Message-----
From: pgsql-performance-owner@xxxxxxxxxxxxxx
[mailto:pgsql-performance-owner@xxxxxxxxxxxxxx] On Behalf Of Martin Foster

Matt Clark wrote:

> In addition we (as _every_ high load site should) run Squid as an
> accelerator, which dramatically increases the number of client connections
> that can be handled.  Across 2 webservers at peak times we've had 50,000
> concurrently open http & https client connections to Squid, with 150 Apache
> children doing the work that squid can't (i.e. all the dynamic stuff), and
> PG (on a separate box of course) whipping through nearly 800 mixed selects,
> inserts and updates per second - and then had to restart Apache on one of
> the servers for a config change...  Not a problem :-)
> 
> One little tip - if you run squid on the same machine as apache, and use a
> dual-proc box, then because squid is single-threaded it will _never_ take
> more than half the CPU - nicely self balancing in a way.
> 
> M
> 

I've heard of the merits of using Squid as a reverse proxy.  However, well
over 99% of my traffic is dynamic, which may be why I am experiencing
behavior that people normally do not expect.

As I have said before in previous threads, the scripts are completely
database-driven.  At the time, before the migration, the database averaged
65 queries per second under MySQL, while the webserver was averaging 2 to 4.

	Martin Foster
	Creator/Designer Ethereal Realms
	martin@xxxxxxxxxxxxxxxxxxx





