
Re: Huge Cache / Memory Usage


 



To clarify,

I am running a forward cache, aimed at reducing egress on my link for
~100 users.

Cheers.

On Wed, Dec 15, 2010 at 11:17 AM, Sunny <sunyucong@xxxxxxxxx> wrote:
> Hi there,
>
> I am working on building a cache with squid 3.1.9.  I've got two
> machines with 4G of RAM and two 500G disks each. I want to make the cache
> as large as possible to maximize the utilization of my two big disks.
>
> However, I soon found out that I am extremely limited by memory: lots
> of swapping starts to happen once my cache exceeds 9M objects. Also,
> every time I restart the cache, it spends an hour just rescanning
> all the entries into memory, and it keeps taking longer.
> And from iostat -x -d, my disk utilization is often below 5%
> during scanning and serving, which is kind of a waste.
>
> In some docs I found a statement that squid needs 14M of RAM (on 64 bit)
> for each 1G of disk cache. If that's the case, to fill a 500G disk I would
> need ~8G of RAM just to hold the metadata.
>
> So my question is:
>
> 1. Is this statement true? Can squid somehow look up objects directly on
> disk to improve disk utilization and reduce memory needs?
> 2. How big a cache do people usually have? I think having a 500G cache
> will definitely improve the hit ratio and byte hit ratio; is that true?
> 3. What other optimizations are needed for building a huge cache?
>
> Thanks in advance.
>
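For reference, here is a rough back-of-the-envelope check of the rule of
thumb quoted above (about 14 MB of index memory per 1 GB of on-disk cache
on 64-bit builds). This is only a sketch using the figures from the message;
the cache_mem value is an illustrative assumption, not a recommendation:

    # Sanity check of the "14 MB RAM per 1 GB of disk cache" rule of thumb
    # for 64-bit squid. Figures are illustrative, not measured.

    MB_PER_GB_CACHE = 14      # in-memory index overhead per 1 GB of cache_dir
    DISK_CACHE_GB = 500       # one 500 GB disk dedicated to cache_dir
    CACHE_MEM_MB = 256        # assumed cache_mem setting (hot-object memory)

    index_mb = DISK_CACHE_GB * MB_PER_GB_CACHE
    total_mb = index_mb + CACHE_MEM_MB

    print(f"index memory:   {index_mb} MB (~{index_mb / 1024:.1f} GB)")
    print(f"plus cache_mem: {total_mb} MB (~{total_mb / 1024:.1f} GB)")

    # With ~4 GB of RAM, a fully used 500 GB cache_dir would not leave room
    # for the in-memory index, which is consistent with the swapping
    # described above.
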



