
Re: Sudden but sustained high bandwidth usage

Regarding swap failures: after suffering with this a lot, even on the latest Squid version, here is what I found. Say you have 32 GB of RAM and you set cache_mem to 10 GB (or whatever size fits your box). Once memory usage reaches that limit, swap failures start happening, mostly on small, fast objects such as .js files or JPEGs, nothing bigger than about 100 KB.
The same problem appears with cache_dir (e.g. cache_dir aufs /mnt/cache-a 500000) once it reaches the maximum size you specified. It does not matter whether you use diskd or aufs, and probably rock behaves the same.
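For reference, a minimal squid.conf sketch of the kind of setup described above; the sizes and the /mnt/cache-a path are just the example values from this message, and cache_swap_low/cache_swap_high are the standard watermarks that decide when eviction starts as the cache_dir fills up:

  cache_mem 10 GB
  maximum_object_size_in_memory 512 KB
  cache_dir aufs /mnt/cache-a 500000 16 256
  # eviction starts between these watermarks (percent of the cache_dir size)
  cache_swap_low 90
  cache_swap_high 95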

What I am trying to say is that during the period when the cache hits its maximum size, it starts to misbehave.
The reason is unknown, but here are my guesses:

1. Replacement: the least frequently used file is evicted from storage, but the object's entry is not deleted from swap.state, and the stale entry later results in a swap failure.

2. It might be a write-ordering issue: the object's info is saved to swap.state while the object itself never makes it onto storage, so when storage is full the two get out of sync.

3. I don't know for sure, but maybe the developers should check in the source code whether the write to swap.state happens before the file is actually stored, which would not be good. It should be the other way around: once the file is stored, that should signal the write to swap.state to save the object details, not before (see the sketch at the end of this message).
As I said, these are the conclusions I have reached from my own testing and experiments.
I might be wrong.
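To make point 3 concrete, here is a minimal, self-contained C++ sketch of the ordering I am arguing for. It is only an illustration of the idea, not Squid's actual code; the paths, file names, and journal format are made up for the example.

// Hypothetical illustration only -- not Squid source code.
// Idea: commit the object's data to disk first, and only then append its
// entry to the index/journal, so the index never points at an object that
// was not fully stored.
#include <cstdio>
#include <string>

// Write the object body to its cache file; return true only if it all made it to disk.
static bool storeObjectOnDisk(const std::string &path, const std::string &body)
{
    std::FILE *f = std::fopen(path.c_str(), "wb");
    if (!f)
        return false;
    const bool wrote = std::fwrite(body.data(), 1, body.size(), f) == body.size();
    return std::fclose(f) == 0 && wrote;
}

// Append the object's metadata to the journal; called only after the data write succeeded.
static bool recordInJournal(const std::string &journalPath, const std::string &entry)
{
    std::FILE *j = std::fopen(journalPath.c_str(), "ab");
    if (!j)
        return false;
    const bool wrote = std::fwrite(entry.data(), 1, entry.size(), j) == entry.size();
    return std::fclose(j) == 0 && wrote;
}

int main()
{
    const std::string body = "...object bytes...";
    // Order matters: data first, journal second. If the data write fails
    // (for example because the disk is full), no journal entry is written,
    // so there is nothing stale for a later lookup to trip over.
    if (storeObjectOnDisk("example-cache/00000001", body))
        recordInJournal("example-cache/index.journal", "00000001 stored\n");
    return 0;
}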



--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Sudden-but-sustained-high-bandwidth-usage-tp4676366p4676635.html
Sent from the Squid - Users mailing list archive at Nabble.com.



