
Re: Extremely high performance squid configuration advice


 



2011/1/8 Mohsen Saeedi <mohsen.saeedi@xxxxxxxxx>:
> I know about coss. It's great, but I have squid 3.1 and I think coss is
> unstable in the 3.x versions. Is that correct?

I need "null" for memory-only cache, which is not provided in squid-3,
so it's all squid-2.x in product environment.
Of cource, we tested every squid-3.x, many bugs and poor performance
to squid-2.x. We tested squid-2.HEAD too, it's worth to try.
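
For reference, a minimal memory-only squid-2 instance needs little more than
this (a rough sketch, assuming squid was built with --enable-storeio=null;
the sizes are only examples):

# no disk store at all; the null type still wants a dummy path argument
cache_dir null /tmp
# keep the whole cache in RAM, sized to fit one instance
cache_mem 6144 MB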

aufs behaves very badly under high pressure; with 8GB of memory and minimal
SATA aufs space per instance, it's still hard to get over 180Mbps.

I haven't tried diskd yet.
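
As for coss, which comes up in the quoted thread below: a squid-2 coss
cache_dir looks roughly like the following. This is only a sketch; the stripe
path and the size/tuning values are illustrative, not our production numbers.

# coss packs many small objects into one pre-allocated stripe file
cache_dir coss /cache0/coss 8000 max-size=524288 block-size=4096
# coss holds whole objects in the stripe, so cap the object size to match
maximum_object_size 512 KB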

> On Fri, Jan 7, 2011 at 8:05 PM, Drunkard Zhang <gongfan193@xxxxxxxxx> wrote:
>> 2011/1/8 Mohsen Saeedi <mohsen.saeedi@xxxxxxxxx>:
>>> And which storage type has better performance now, aufs or diskd? On
>>> SAS HDDs, for example.
>>
>> Neither of them; we are using coss on SATA. And coss on SSD is under
>> testing; it still looks good.
>>
>>> On Fri, Jan 7, 2011 at 7:56 PM, Drunkard Zhang <gongfan193@xxxxxxxxx> wrote:
>>>>
>>>> 2011/1/7 Amos Jeffries <squid3@xxxxxxxxxxxxx>:
>>>> > On 07/01/11 19:08, Drunkard Zhang wrote:
>>>> >>
>>>> >> To get a squid server to 400M+ of traffic, I did these things:
>>>> >> 1. Memory only
>>>> >> The IO bottleneck is too hard to avoid at high traffic, so I did not use
>>>> >> hard disks; only memory is used for the HTTP cache. 32GB or 64GB of
>>>> >> memory per box works well.
>>>> >
>>>> > NP: The problem in squid-2 is large objects in memory. Note, though, that
>>>> > the more objects you have cached, the slower the index lookups become (a
>>>> > very, very minor impact).
>>>> >
>>>>
>>>> With 6-8GB of memory there are about 320K objects per instance, so no
>>>> significant delay should result.
>>>>
>>>> >>
>>>> >> 2. Disable useless ACLs
>>>> >> I did not use any ACLs, not even the defaults:
>>>> >> acl SSL_ports port 443
>>>> >> acl Safe_ports port 80          # http
>>>> >> acl Safe_ports port 21          # ftp
>>>> >> acl Safe_ports port 443         # https
>>>> >> acl Safe_ports port 70          # gopher
>>>> >> acl Safe_ports port 210         # wais
>>>> >> acl Safe_ports port 1025-65535  # unregistered ports
>>>> >> acl Safe_ports port 280         # http-mgmt
>>>> >> acl Safe_ports port 488         # gss-http
>>>> >> acl Safe_ports port 591         # filemaker
>>>> >> acl Safe_ports port 777         # multiling http
>>>> >> acl Safe_ports port 901         # SWAT
>>>> >> http_access deny !Safe_ports
>>>> >> http_access deny CONNECT !SSL_ports
>>>> >>
>>>> >> squid itself does not apply any ACLs; security is ensured by other
>>>> >> layers, like iptables or ACLs on the routers.
>>>> >
>>>> > Having the routers etc. assemble the packets and parse the HTTP-layer
>>>> > protocol to find these details may be a larger bottleneck than testing
>>>> > for them inside Squid, where the parsing has to be done a second time
>>>> > anyway to pass the request on.
>>>> >
>>>>
>>>> We only do HTTP caching on TCP port 80, and the incoming source IPs are
>>>> controllable, so iptables should be OK.
>>>>
>>>> > Note that the default port and method ACLs in Squid validate the URLs in
>>>> > the HTTP headers, not the packet destination port.
>>>> >
>>>> >>
>>>> >> 3. refresh_pattern, mainly caching pictures
>>>> >> Make squid cache for as long as it can, so it looks like this:
>>>> >> refresh_pattern -i \.(jpg|jpeg|gif|png|swf|htm|html|bmp)(\?.*)?$
>>>> >> 21600 100% 21600 reload-into-ims ignore-reload ignore-no-cache
>>>> >> ignore-auth ignore-private
>>>> >>
>>>> >> 4. multi-instance
>>>> >> I can't get a single squid process over 200M, so multi-instance
>>>> >> makes perfect sense.
>>>> >
>>>> > Congratulations, most people can't get Squid to go over 50MBps per instance.
>>>> >
>>>> >> Both the CARP frontend and the backends (which store the HTTP files)
>>>> >> need to be multi-instanced. The frontend configuration is here:
>>>> >> http://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend
>>>> >>
>>>> >> I heard that squid still can't handle "huge" amounts of memory properly,
>>>> >> so I split the big memory into 6-8GB per instance, each listening on a
>>>> >> port just below 80. On a box with 32GB of memory, the CARP frontend is
>>>> >> configured like this:
>>>> >>
>>>> >> cache_peer 192.168.1.73 parent 76 0 carp name=73-76 proxy-only
>>>> >> cache_peer 192.168.1.73 parent 77 0 carp name=73-77 proxy-only
>>>> >> cache_peer 192.168.1.73 parent 78 0 carp name=73-78 proxy-only
>>>> >> cache_peer 192.168.1.73 parent 79 0 carp name=73-79 proxy-only
>>>> >>
>>>> >> 5. CARP frontend - cache_mem 0 MB
>>>> >> I used to use "cache_mem 0 MB", but over time I came to think that it's
>>>> >> a waste to GET files smaller than 1.5KB from a CARP backend. Am I right?
>>>> >> I use these now:
>>>> >>
>>>> >> cache_mem 5 MB
>>>> >> maximum_object_size_in_memory 1.5 KB
>>>> >
>>>> > The best value here differs on every network, so we can't answer your
>>>> > question with specifics.
>>>>
>>>> Here's my idea: doing the 3-way TCP handshake only to transfer data that
>>>> fits in ONE packet is silly, so let such objects be stored locally. By my
>>>> observation, there are no more than 500 StoreEntries per CARP frontend.
>>>>
>>>> > Log analysis of live traffic will show you the number of objects your
>>>> > Squid is handling in each size bracket. That will determine the best place
>>>> > to set this limit to reduce the lag on small items versus your available
>>>> > cache_mem memory.
>>>> >
>>
>
>
>
> --
> Seyyed Mohsen Saeedi
> سید محسن سعیدی
>
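
To round out the multi-instance picture from the quoted thread above: each
CARP backend is just a separate squid process with its own port, pid file and
memory budget. A per-backend squid.conf might look roughly like this (the port
matches the cache_peer lines quoted above; everything else is illustrative):

# backend instance on port 76; one squid.conf per instance
http_port 76
pid_filename /var/run/squid-76.pid
# 6-8GB of cache per instance, per the discussion above
cache_mem 6144 MB
# memory-only: no disk store
cache_dir null /tmp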



-- 
Drunkard Zhang
gongfan193@xxxxxxxxx
zhangsw@xxxxxxxxxxxxx
18601633785


