Re: throughput limitation from cache

Also sprach Henrik Nordstrom <hno@xxxxxxxxxxxxxxx> (Sat, 14 Jan 2006
14:26:20 +0100 (CET)):
> On Sat, 14 Jan 2006, Richard Mittendorfer wrote:
> >> Why I ask is because diskd is known to be somewhat slow on large
> >> cache
> >
> > Not really large. 2x 1G. It's no storage bottleneck, I believe.
> 
> large cache hits == hits on largeish cached objects.

Oh, sure. Didn't have enough coffee this morning.. :-)
 
> >> hits in certain situations UNLESS there is sufficient traffic to
> >> keep Squid reasonably busy (i.e. problems if you are the only user,
> >> or very few users). And the same for aufs in older versions of
> >> Squid.
> >
> > See. Would fit.
> 
> A quick test if this is your problem is to reconfigure your Squid to
> use the ufs cache_dir type.

7.30 MB/s. That helps. A little slower with aufs: 6.85 MB/s.
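For anyone repeating the test: switching the storage backend is a one-line
change per cache_dir in squid.conf (the directory paths here are my guess,
sized to match the 2x 1G setup mentioned above), followed by a restart of
Squid since the on-disk layout type changed:

```
# Hypothetical example matching a 2x 1G cache.
# cache_dir <type> <directory> <Mbytes> <L1> <L2>

# diskd variant (original setup):
#cache_dir diskd /cache1 1024 16 256
#cache_dir diskd /cache2 1024 16 256

# ufs variant (the quick test Henrik suggests):
cache_dir ufs /cache1 1024 16 256
cache_dir ufs /cache2 1024 16 256
```

The same lines with "aufs" instead of "ufs" give the threaded variant
compared below.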

hmm.. However, aufs (posix-threads?) seems to malloc a lot of memory.
Running on a mere 256M RAM and offering a good many services,
Committed_AS climbs to 550M (340M w/ diskd), and Squid hasn't even been
used yet. I suppose it will get swapped out much more easily. Will
memory consumption be much higher with aufs than with diskd(/ufs)?
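For the record, the figure I'm quoting is the standard Linux counter
from /proc/meminfo, which can be sampled before and after switching
cache_dir types:

```shell
# Committed_AS is the kernel's estimate of total committed virtual
# memory across all processes; a jump after starting Squid with aufs
# shows the extra allocation from the thread pool.
grep Committed_AS /proc/meminfo
```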

I'll see in a few hours/days.

> Regards
> Henrik

THX ritch
