Re: Corrections (was TCP_SWAPFAIL/200)




On 20/04/2012 8:30 a.m., Linda Walsh wrote:
Amos Jeffries wrote:

On 18.04.2012 12:46, Linda Walsh wrote:

http_access allow CONNECT Safe_Ports

NOTE: Dangerous. Safe_Ports includes ports 1024-65535 and other ports unsafe to permit CONNECT to. This could trivially be used as a multi-stage spam proxy or worse, e.g. a trivial DoS of "CONNECT localhost:8080 HTTP/1.1\n\n" results in a CONNECT loop until your machine's ports are all used up.

----
Good point. I just wanted to allow the general case of SSL/non-SSL over any of the ports. Just trying to get things working at this point... though I have had this config for some time with no problems -- the only connector is on my side and 'me', so
I shouldn't deny myself my own service unless I try!  ;-)


That's part of the point. There is nothing restricting this allow to just you. It allows CONNECT to anywhere on any of those ports. Better to just omit the normal "deny CONNECT !SSL_ports" and leave the allow rule being the "allow localnet" one. That way you can do anything, but others can't abuse the proxy.
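Something along these lines is probably what you want (the localnet range below is only a placeholder, adjust it to cover just your own machines):

 acl localnet src 192.168.0.0/16
 # no "http_access deny CONNECT !SSL_ports" line, so CONNECT is not
 # limited by port; that is only safe because of the localnet restriction
 http_access allow localnet
 http_access deny all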

cache_mem       8 GB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir aufs /var/cache/squid 65535 64 64

You have multiple workers configured. AUFS does not support SMP at this time. That could be the cause of your SWAPFAIL problem, as the workers collide while altering the cache contents.

---
    Wah?   .. but but... how do I make use of SMP with AUFS?

If I go with unique cache dirs, that's very sub-optimal -- since I end up
with 12 separate cache areas, no?  When I want to fetch something from
the cache, is there coordination about what content is in which worker's cache that will automatically invoke the correct worker? -- If so, that's cool,
but if not, then I'll reduce my hit rate by 1/N-cpus.


There is shared memory doing things I have not quite got my own head around yet. I think it's just shared cache_mem and rock storage which are cross-worker coordinated. The others AFAIK still need traditional multi-process coordination like HTCP/ICP/CARP between worker processes.





To use this cache, either wrap it in "if ${process_number} = N" tests for the workers you want to do caching, or add ${process_number} to the path so each worker gets its own unique directory area.

eg:
 cache_dir aufs /var/cache/squid_${process_number} 65535 64 64

or
if ${process_number} = 1
 cache_dir aufs /var/cache/squid 65535 64 64
endif
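If you go the unique-directory route with multiple workers, the combined picture is roughly this (the worker count and sizes are placeholders; each worker gets its own 16 GB so the total stays near your original 64 GB):

 workers 4
 cache_dir aufs /var/cache/squid_${process_number} 16384 64 64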




--- As said above, how do I get the benefit of both asynchronous writes
and multiple cores?

At present only the "rock" type cache_dir (for small, <32KB objects) and cache_mem support SMP. To get 3.2 released stable this year we had to stop short of full SMP support across the board :-(. It is coming one day, with sponsorship that day can come faster, but it's not today.
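An SMP-friendly setup could be sketched roughly like this (worker count and sizes are placeholders):

 workers 4
 # rock storage is shared safely between the workers (objects up to 32KB)
 cache_dir rock /var/cache/squid/rock 8192
 # cache_mem is also shared between the workers in 3.2
 cache_mem 8 GB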


url_rewrite_host_header off
url_rewrite_access deny all
url_rewrite_bypass on

You do not have any re-writer or redirector configured. These url_rewrite_* can all go.

-----
    Is it harmful? (It was for future 'expansion plans' -- no
rewriters yet, but I was planning...)

No. Just a speed drag at present.
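When you do get to the rewriter, the wiring would look roughly like this (the helper path and child count are made up for illustration):

 # hypothetical helper program
 url_rewrite_program /usr/local/bin/my_rewriter.pl
 url_rewrite_children 5
 url_rewrite_access allow localnet
 url_rewrite_access deny all
 url_rewrite_bypass on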




refresh_pattern -i (/cgi-bin/|\?) 0     0%      0

This above pattern ...


====
???? above what pattern?


"refresh_pattern -i (/cgi-bin/|\?) 0     0%      0"



refresh_pattern -i \.(ico|gif|jpg|png)   0 20%   4320 ignore-no-cache ignore-private override-expire
refresh_pattern -i ^http: 0 20% 4320 ignore-no-cache ignore-private

"private" means the contents MUST NOT be served to multiple clients. Since you say this is a personal proxy just for you, thats okay but be carefulif you ever open it for use by other people. Things like your personal details embeded in same pages are cached by this.

----
    Got it... I should add a comment in that area to that effect.


    That might be an enhancement -- something like:
    ignore-private-same-client



"no-cache" *actually* just means check for updates before using the cached version. This is usually not as useful as many tutorials make it out to be.

---
    Well, dang tutorials -- I'm screwed if I follow, and if I don't! ;-)


Sad, eh?





refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440

 ... is meant to be here (second to last).

refresh_pattern .               0       20%     4320
read_ahead_gap 256 MB

Uhm... 256 MB buffering per request.... sure you want to do that?

----
    I **think** so... doesn't that mean it will buffer up to 256MB
of a request before my client is ready for it?

Yes, exactly so. In RAM, which is the risky part. If the Squid process starts swapping, your service speed goes down the drain very fast.


    I think of the common case where I am saving a file and it takes me
a while to find the dir to save to.  I tweaked a few params in this area,
and it went from having to wait after I decided, to the download already being
finished by the time I had decided.

    Would this be responsible for that?


Yes. I'm just highlighting 256 MB as a very big buffer.
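If you ever want to rein that in, it is a one-line change, something like (16 MB here is an arbitrary example value, tune to taste):

 read_ahead_gap 16 MB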

Amos

