
Re: Fwd: Squid configuration advice


 



OK, thanks for your info.

2015-12-19 21:02 GMT+01:00 Yuri Voinov <yvoinov@xxxxxxxxx>:

The best result I can achieve these days with 3.5.12 is:

http://i.imgur.com/Lm6MkwH.png

The maximum hit ratio is 50-55%, and that is with a VERY complex configuration.

With good old 3.4.14 I achieved a cache hit ratio of over 86%, but that is in the past.

On 20.12.15 1:59, Jean Christophe Ventura wrote:
> Reading the mailing list, I know that SSL-Bump is the key to getting a
> more useful hit ratio.
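>
> For reference, the bumping side would look roughly like this in squid.conf
> (a minimal 3.5-style peek-and-bump sketch; the CA certificate path is a
> placeholder, and older releases use the server-first form instead):
>
> http_port 3128 ssl-bump cert=/etc/squid/ssl_cert/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> acl step1 at_step SslBump1
> ssl_bump peek step1
> ssl_bump bump all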
>
> But this proxy infrastructure isn't acting as my company's proxy; it is
> an ISP proxy for the company's clients, and I can't go that way without
> involving people like lawyers/security folks :)
>
> I know there is no magic button to cache the whole Internet at home ;)
> but at least my job, within this project's constraints, is to get the
> best I can :)
>
> 2015-12-19 20:51 GMT+01:00 Yuri Voinov <yvoinov@xxxxxxxxx>:
>
>>
> I'm sorry if that upset you. :)
>
> On 20.12.15 0:56, Jean Christophe Ventura wrote:
> >>> Hi,
> >>>
> >>> I'm currently working on migrating from Squid 2.7 on RHEL5 to Squid
> >>> 3.3 on RHEL7.
> >>>
> >>> I have migrated the config files to be 3.3 compliant (CIDR notation,
> >>> removal of deprecated directives, change of the cache from UFS to AUFS)
> >>> without any other change (cache_mem, policy, SMP).
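> >>>
> >>> For illustration, the storage change was essentially one keyword in the
> >>> cache_dir line (the size/L1/L2 values below are placeholders, not my
> >>> real ones):
> >>>
> >>> # 2.7 (old):
> >>> cache_dir ufs /var/spool/squid 80000 16 256
> >>> # 3.3 (new):
> >>> cache_dir aufs /var/spool/squid 80000 16 256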
> >>>
> >>> The new platform is 4 R610 nodes (24 cores with hyperthreading enabled)
> >>> with 48GB of RAM and only 143GB of disk in RAID for the OS and cache.
> >>> Each node is connected to the network with 2x1Gbit layer 2/3 bonding
> >>> (some network ports are still available on each server).
> >>>
> >>> Bandwidth allocated for Internet users: 400Mbit.
> >>>
> >>> The difference between the old platform and the new one doesn't seem
> >>> very impressive :P
> >>> I have read a lot of the mailing list history.
> >>>
> >>> Squid release:
> >>> I know 3.3 is no longer maintained, but this infrastructure will not be
> >>> maintained by me, and I don't think the people behind it will do the
> >>> updates themselves.
> >>> If an official repository existed, maybe this question would be reopened
> >>> (from what I have read, it's more that some of you build packages from
> >>> source and give them to people).
> >>>
> >>> Squid auth:
> >>> It's transparent/basic auth, only filtering some IPs with ACLs.
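> >>>
> >>> Roughly this kind of thing (a trimmed sketch; the subnet is a
> >>> placeholder, not our real client range):
> >>>
> >>> acl customers src 10.0.0.0/8
> >>> http_access allow customers
> >>> http_access deny all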
> >>>
> >>> Squid bandwidth:
> >>> Currently a Squid node handles something like 30-50Mbit (measured with
> >>> iftop).
> >>> From previously read mail I think that is normal for a non-SMP
> >>> configuration.
> >>>
> >>> Squid measure:
> >>> [root@xxxx ~]# squidclient mgr:5min | grep 'client_http.requests'
> >>> client_http.requests = 233.206612/sec
> >>> Other info:
> >>> Cache information for squid:
> >>>         Hits as % of all requests:      5min: 6.8%, 60min: 7.1%
> >>>         Hits as % of bytes sent:        5min: 4.7%, 60min: 4.4%
> >>>         Memory hits as % of hit requests:       5min: 21.4%, 60min: 21.5%
> >>>         Disk hits as % of hit requests: 5min: 34.7%, 60min: 30.8%
> >>>         Storage Swap size:      9573016 KB
> >>>         Storage Swap capacity:  91.3% used,  8.7% free
> >>>         Storage Mem size:       519352 KB
> >>>         Storage Mem capacity:   99.1% used,  0.9% free
> >>>         Mean Object Size:       47.71 KB
> >>>
> >>> Now, questions and advice:
> >>>
> >>> These metrics seem too low to me. Does anyone agree?
> >>>
> >>> 4 nodes x 50Mbit per node = 200Mbit.
> >>> To handle the maximum bandwidth (400Mbit) plus the loss of one host, I
> >>> need to configure 4 workers per node.
> >>> Is there any reason or brilliant idea to use more (I will still have
> >>> some cores available)? Or is the calculation too empirical?
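> >>>
> >>> In other words, something like this per node (a minimal sketch; the CPU
> >>> pinning line is optional and the core numbers are just an example):
> >>>
> >>> workers 4
> >>> cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4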
> >>>
> >>> This URL, http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster,
> >>> seems to be a good start :P
> >>> Using this method I can interconnect the proxies to share their caches
> >>> (maybe over a dedicated network port). Useful or not? Might this
> >>> increase the hit ratio? If the idea isn't stupid, should I interconnect
> >>> through the frontends only, or directly between all of them?
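> >>>
> >>> From that wiki page, the per-box skeleton would be roughly the following
> >>> (a sketch only; the worker count, include file names and port are taken
> >>> from the example, not from a tested setup):
> >>>
> >>> workers 5
> >>> if ${process_number} = 5
> >>> include /etc/squid/frontend.conf
> >>> else
> >>> include /etc/squid/backend.conf
> >>> endif
> >>> # frontend.conf then points CARP parents at the backends, e.g.
> >>> # cache_peer 127.0.0.1 parent 4001 0 carp name=backend1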
> >>>
> >>> For now I have:
> >>> - 100GB of disk available for cache
> >>> - 40GB of RAM (leaving 8GB for the OS plus Squid's disk-cache-related RAM usage)
> >>>
> >>> The plan is 1 frontend with the RAM cache and 4 backends with disk cache.
> >>> AUFS or Rock cache? A mix of them? 50% each? Maybe some other rule?
> >>> (I think it will depend on the cached content, but any advice or method
> >>> is welcome.)
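> >>>
> >>> In squid.conf terms I imagine a split along these lines (sizes and paths
> >>> are placeholders; note that Rock in 3.3 only stores small objects, up to
> >>> about 32KB):
> >>>
> >>> # frontend kid: memory cache only
> >>> cache_mem 32 GB
> >>> # backend kids: one AUFS dir per worker, plus a shared Rock store
> >>> cache_dir aufs /cache/aufs-${process_number} 20000 16 256
> >>> cache_dir rock /cache/rock 20000 max-size=32768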
> >>>
> >>> I can get more speed and/or space for the disk cache using a SAN; do you
> >>> know whether the cache I/O is sequential or random?
> >>>
> >>> Any advice/rules to increase the hit ratio? :)
> >>> Any general advice/rules?
> >>>
> >>> Thanks for your help
> >>>
> >>>
> >>> Jean Christophe VENTURA




_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
