I do not understand the love of archaeological fossils. Why keep such junk lying around in the repositories? :-D

On 20.12.15 0:56, Jean Christophe Ventura wrote:
> Hi,
>
> I'm currently working on migrating a RHEL5 Squid 2.7 installation to
> Squid 3.3 on RHEL7.
>
> I have migrated the config files to be 3.3 compliant (CIDR notation,
> removal of deprecated directives, cache changed from UFS to AUFS)
> without any other changes (cache_mem, policy, SMP).
>
> The new platform is a 4-node R610 cluster (24 processors with
> hyperthreading enabled), each with 48GB of RAM and only 143GB of disk
> in RAID for the OS and cache. Each node is connected to the network
> with a 2x1Gbit layer 2/3 bond (some network ports are still available
> on the servers).
>
> Bandwidth allocated for Internet users: 400Mbit.
>
> The difference between the old platform and the new one doesn't seem
> very fantastic :P
> I have read the mailing list history a lot.
>
> Squid release:
> I know 3.3 is no longer maintained, but this infrastructure will not
> be maintained by me, and I don't think the people behind it will do
> the updates themselves.
> If an official repository existed, maybe this question could be
> reopened (from what I have read, some of you build packages from
> source and give them to people).
>
> Squid auth:
> It's transparent/basic auth, only filtering some IPs with ACLs.
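For reference, the IP-filtering + basic-auth pieces described above usually look something like this in squid.conf (a minimal sketch; the subnet, file paths, and ACL names are made up). Note that proxy authentication can only be applied to explicit proxy requests, not to intercepted (transparent) traffic:

```
# Hypothetical subnet and ACL names -- adjust to the real network.
acl localnet src 10.0.0.0/8                 # CIDR form, as required in 3.x

# Basic auth helper (path as shipped in the RHEL7 squid package;
# helpers were renamed with scheme prefixes in 3.2, e.g. basic_ncsa_auth).
auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic realm proxy
acl authenticated proxy_auth REQUIRED

http_access allow localnet authenticated
http_access deny all
```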
>
> Squid bandwidth:
> Currently a Squid node handles something like 30-50Mbit (measured
> with iftop).
> From previous mails I think this is normal for a non-SMP
> configuration.
>
> Squid measurements:
> [root@xxxx ~]# squidclient mgr:5min | grep 'client_http.requests'
> client_http.requests = 233.206612/sec
> Other info:
> Cache information for squid:
>         Hits as % of all requests:        5min: 6.8%, 60min: 7.1%
>         Hits as % of bytes sent:          5min: 4.7%, 60min: 4.4%
>         Memory hits as % of hit requests: 5min: 21.4%, 60min: 21.5%
>         Disk hits as % of hit requests:   5min: 34.7%, 60min: 30.8%
>         Storage Swap size:                9573016 KB
>         Storage Swap capacity:            91.3% used, 8.7% free
>         Storage Mem size:                 519352 KB
>         Storage Mem capacity:             99.1% used, 0.9% free
>         Mean Object Size:                 47.71 KB
>
> Now, questions and advice:
>
> These metrics seem too low to me. Does anyone agree?
>
> 4 nodes x 50Mbit per node = 200Mbit.
> To handle the maximum bandwidth (400Mbit) plus the loss of one host,
> I need to configure 4 workers per node.
> Is there any reason or brilliant idea for more (I will still have
> some cores available)? Or is this calculation too empirical?
>
> This URL http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster
> seems to be a good start :P
> Using this method I can interconnect the proxies to share their cache
> (maybe using a dedicated network port). Useful or not? Might this
> increase the hit ratio? If this idea isn't stupid, should I
> interconnect via the frontends only, or directly to each backend?
>
> For now I have:
> - 100GB of disk available for cache
> - 40GB of RAM (keeping 8GB for the OS + Squid's disk-cache-related
>   RAM usage)
>
> 1 frontend with the RAM cache and 4 backends with disk cache.
> AUFS or Rock cache? A mix of them? 50% each? Maybe another rule?
> (I think it will depend on the cache content, but any advice or
> method is welcome.)
>
> I can get more speed and/or space for the disk cache using a SAN; do
> you know if the I/O pattern is sequential or random?
>
> Any advice/rules to increase the hit ratio?
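A minimal squid.conf sketch of the SMP split being asked about, loosely in the spirit of the SmpCarpCluster wiki example (worker count, sizes, and paths are illustrative, not recommendations):

```
# Illustrative only: 4 workers per node, matching the estimate above.
workers 4

# In 3.3, Rock is the only cache_dir type that SMP workers can share;
# note it also caps objects at 32KB, which is below the 47.71 KB mean
# object size reported above, so Rock alone would miss larger objects.
cache_dir rock /var/spool/squid/rock 50000 max-size=32768

# AUFS dirs cannot be shared between workers; a common pattern is one
# dir per worker using the ${process_number} macro (hypothetical sizes).
cache_dir aufs /var/spool/squid/aufs${process_number} 10000 16 256
```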
> :)
> Any general advice/rules?
>
> Thanks for your help
>
>
> Jean Christophe VENTURA
> _______________________________________________
> squid-users mailing list
> squid-users@xxxxxxxxxxxxxxxxxxxxx
> http://lists.squid-cache.org/listinfo/squid-users