Hiyas, Joshua Slive wrote:
> On 7/25/06, Forrest Aldrich <forrie@xxxxxxxxxx> wrote:
>> I'm unable to find the RPMs for Fedora or CentOS (which we're using)
>> that correspond to 2.2.x. Well, actually, I found a src RPM, which is
>> being compiled with options I need to customize (possibly via
>> rpmbuild) - I'm from FreeBSD, so RPM is very new ;-) It has several
>> dependencies. Suggestion(s) welcomed.
>
> Compile it yourself, either from scratch or from the rpm spec files
> included with the distribution.
>
>> We already have httpd-2.0 installed on the designated system, which
>> is why I asked. I believe our goal is to utilize mod_mem_cache on the
>> front-end image servers. What particular problems have been observed
>> with mod_mem_cache?
>
> I don't use it, so I don't have details. But it has gotten far less
> attention than mod_disk_cache. mod_disk_cache will almost always
> perform better because it shares the cache among all processes
> (mod_mem_cache has a separate cache for each process) and can give the
> kernel the chance to optimise memory caching of filesystem access.
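On the rpmbuild side, the usual workflow on Fedora/CentOS is roughly the
following (a sketch only; the .src.rpm filename is a placeholder, and the
SPECS path varies between distributions):

    # Rebuild the source RPM as-is:
    rpmbuild --rebuild httpd-2.2.x.src.rpm

    # Or install the source RPM and edit the spec file first, to
    # customize the build options before building:
    rpm -ivh httpd-2.2.x.src.rpm
    cd /usr/src/redhat/SPECS     # path varies by distribution
    vi httpd.spec                # adjust the configure options as needed
    rpmbuild -ba httpd.spec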
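And since caching is the interesting question here: a minimal
mod_disk_cache setup on 2.2 looks something like this (a sketch only;
the cache root and size limit are illustrative):

    LoadModule cache_module modules/mod_cache.so
    LoadModule disk_cache_module modules/mod_disk_cache.so

    <IfModule mod_disk_cache.c>
        CacheEnable disk /
        CacheRoot /var/cache/httpd/proxy
        CacheDirLevels 2
        CacheDirLength 1
        # in bytes: skip objects larger than this
        CacheMaxFileSize 1000000
    </IfModule>

The cache directory grows without bound, so you would normally run
htcacheclean alongside it to keep the size under control.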
That's one thing I like about squid. Since it uses a single-daemon model (no child processes or threads at all), all objects are available to all clients in one single cache (be it memory or disk).
That said, its biggest problem is that it runs on a single CPU (I'm talking about squid 2.6 here). Of course, nothing forces you to run only a single instance of squid per machine (you can run one copy per CPU).
I would recommend something like this: 'X' squid instances on your machine (one per CPU), each one with a different IP, and DNS round robin to split the load between those instances (or a hardware load-balancing solution, if you have the money). On each squid, it is up to you whether to disable the disk cache. If you have a decent backend, I would go for memory only (and try not to restart your squid servers too often).
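Concretely, each instance would get its own squid.conf along these lines
(a sketch for squid 2.6; the addresses and sizes are illustrative, and
the "null" store assumes squid was built with --enable-storeio=null):

    # Instance 1 of N, bound to its own address, memory-only cache
    http_port 192.0.2.11:80
    visible_hostname cache1.example.com
    cache_mem 2048 MB
    maximum_object_size_in_memory 512 KB
    # null store: no disk cache at all
    cache_dir null /tmp

The round robin itself is just multiple A records for the same name in
your zone file, one per instance:

    images  IN  A  192.0.2.11
    images  IN  A  192.0.2.12
    images  IN  A  192.0.2.13
    images  IN  A  192.0.2.14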
In theory, you're able to deliver around 3000 requests per second per running copy of squid. Of course, we are talking about a 99% cache-hit rate (or thereabouts). On a quad-CPU machine, that means around 12000 (static and cached) requests per second. I can barely reach 6500 requests per second with Apache 2.0 on the same hardware configuration.
The commercial "NetCache" solution is basically a tuned-up squid doing the same thing (it is usually sold as a dedicated cache, used to serve only static images).
(A message for the bashers:) Of course, if we're not talking about static and (memory-)cached content, squid is way slower than Apache. They are distinct pieces of software, for different uses.
> If you are saying you are going to mem_cache static files, I don't see
> the point. On a modern OS, sendfile and the system's buffer cache
> working together will likely beat anything that can be done in user
> space. (But again, I've never tried it, so I'm only talking
> theoretically.)
>
> Joshua.
Do not forget that if you're going to serve content from an NFS partition, sendfile is not an option. This is the case for many big ISPs/portals (mine included). In these situations, you're not able to deliver all your content from a single machine, no matter how big it is (right now, we use around 30 high-end machines just to deliver our www content).
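If your content lives on NFS, the httpd documentation recommends turning
these features off explicitly, since sendfile and mmap can misbehave on
network filesystems (a minimal example; put it in the global config or
the relevant <Directory> block):

    EnableSendfile Off
    EnableMMAP Off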
Regards,

Domingos.

--
Domingos Parra Novo
Project Coordinator
Terra Networks Brasil S/A
Tel: +55(51)3284-4275