Hi, I am wondering if Squid is the right tool to solve a scaling problem we're having.

Our static content is currently served directly from Apache boxes to the end user:

    User <=> Apache

Originally it was just one Apache box, but its disk I/O became saturated, so now we have three Apache boxes, each with its own copy of our library on direct-attached storage. The problem is that our library is getting quite large, and although three copies of everything is nice, continuing down this road any further is going to result in a lot of unnecessary duplication of content (read: $).

So, I'm thinking of changing it to be more like this:

    User <=> Squid CARP <=> Squid caches <=> Apache

The idea is that we can scale delivery capacity and library capacity independently. When our delivery needs grow/shrink, we'll add/remove machines in the cache layer; these machines would be RAM-heavy with a high spindle-to-GB ratio (or SSDs). When our library grows/shrinks, we'll attach/detach storage on the Apache origin server. We'll use two of the existing Apache servers as the origin, with failover for redundancy.

So, is this a sane use of Squid? Is there a better way to approach this? I've pasted a rough sketch of the configs I have in mind at the bottom of this post. Any words of wisdom greatly appreciated!

J
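To make the idea concrete, here's a minimal sketch of the two squid.conf files I'm picturing. All hostnames, the site name (static.example.com), and the size numbers are placeholders, and I haven't tested any of this yet.

On the frontend, the carp option should hash each URL to exactly one cache-layer peer, so the library gets partitioned across the cache tier rather than duplicated, and proxy-only keeps the frontend from storing its own copies:

    # frontend squid.conf (CARP router, no local cache)
    http_port 80 accel defaultsite=static.example.com
    # never store objects on the router itself
    cache deny all
    cache_peer cache1.example.com parent 80 0 carp no-query proxy-only
    cache_peer cache2.example.com parent 80 0 carp no-query proxy-only
    cache_peer cache3.example.com parent 80 0 carp no-query proxy-only
    # always go through the cache tier, never straight to the origin
    never_direct allow all
    acl our_site dstdomain static.example.com
    http_access allow our_site
    http_access deny all

On each cache-layer box, both Apache origins are listed as originserver parents. As I understand it, Squid sends traffic to the first parent that is up, so origin2 would only see requests if origin1 goes down:

    # cache-layer squid.conf (RAM/SSD-heavy box; sizes are guesses)
    http_port 80 accel defaultsite=static.example.com
    cache_mem 8192 MB
    # aufs assumes a Linux build with async I/O; plain ufs works anywhere
    cache_dir aufs /var/spool/squid 500000 16 256
    maximum_object_size 512 MB
    cache_peer origin1.example.com parent 80 0 no-query originserver
    cache_peer origin2.example.com parent 80 0 no-query originserver
    never_direct allow all
    acl our_site dstdomain static.example.com
    http_access allow our_site
    http_access deny all

Does that look roughly right, or am I misusing carp/proxy-only here?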