
Re: Working Squid Configuration, but needs some fat reduction


 



On 18/02/2012 8:20 a.m., ALAA MURAD wrote:
Dear Amos,

Thanks so much for your help; I hope these changes make more sense. I
have been following your comments, and I feel Squid is running better
than before.


The configuration now feels cleaner and Squid outputs fewer errors, but
mainly I'm now suffering from one thing: sometimes I get this error:
clientNegotiateSSL: Error negotiating SSL connection on FD 355:
WSAENOTCONN, Socket is not connected. (10057)
clientNegotiateSSL: Error negotiating SSL connection on FD 356:
Resource temporarily unavailable (11)

The TCP connection "FD 355" is closing before the SSL details can be negotiated between Squid and the client. The TCP connection "FD 356" is negotiating, but was unsuccessful due to a "resource unavailable" SSL error. Probably broken TCP again, or the SSL security system failing to access something.

Are you able to identify whether those are connections between the peers, or from outside clients?


Both servers are connected peer-to-peer (back-to-back), and a running
ping confirms the connection is up all the time!

One thing to note about the above problem: it mainly causes errors in
IE saying that the certificate has expired (this is really random and
I am not sure what is wrong with it; after waiting a few minutes the
certificate is OK again!).


Other than that, it's perfect!


Also, I did the following:

1- I have removed all HTTP (port 80) settings, as this reverse proxy
runs only on SSL. Also, do I need all these Safe_ports for a site that
only serves port 443?!

Safe_ports is optional for a reverse proxy. It is only relevant to forward-proxy traffic and to the management port, IF you choose to use one (the management requests can be done over accel ports too).
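For an HTTPS-only reverse proxy, that means the whole Safe_ports list can be trimmed down to the one port actually served. A minimal sketch:

```
# HTTPS-only reverse proxy: one safe port is enough
acl Safe_ports port 443
http_access deny !Safe_ports
```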

2- Removed cache & unwanted logs.
3- Still confused with defaultsite=www.eservices.mycompany.com. I
kind of got the point, but I am not sure what the perfect alternative
is (removing it caused a header error in the browser).

The vhost option tells Squid to use the client's Host: header to identify the domain. defaultsite= is a backup, used only if the client did not send any Host: information.
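In practice the two options sit together on the https_port line, with defaultsite= only filling in when no Host: header arrives. A sketch using the certificate paths from the config below:

```
# vhost: take the domain from the client's Host: header
# defaultsite=: fallback domain when the client sends no Host: header
https_port 443 cert=C:/Interceptor/cert/baj.cert key=C:/Interceptor/cert/baj.key vhost defaultsite=www.eservices.mycompany.com
```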

4- Also, about my rule "http_access allow all": I guess that is needed
in a reverse proxy, as I want to allow everyone to hit my site.

No. Allowing your site is all that is needed.
You earlier had "http_access allow mycompanyserver", which was doing the correct thing, and doing it before the Safe_ports and surrounding access controls. That (at the top) is the right way to do reverse-proxy access controls, so the forward-proxy ones do not get a chance to slow down the requests.

"allow all" has the effect of passing all requests on to the backend server. By only allowing requests for your site to go back to the server, Squid can protect against DDoS attacks using randomised domain names pointed at you.
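A sketch of the site-only access control described above (the ACL name is illustrative; the domain is taken from the config below):

```
# allow only requests for the published site, deny everything else
acl mycompanysite dstdomain www.eservices.mycompany.com
http_access allow mycompanysite
http_access deny all
```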


5- The redirector mostly outputs blanks (99%), and in rare cases it
intercepts and rewrites the URL.

Okay, good.

As for concurrent rewrites: isn't that why we can load many helpers,
and won't that help with concurrency? I'm good with threading in Java,
but what I'm afraid of is confusing Squid by printing output out of
order in a multi-threaded application.

The concurrency works by Squid sending each request with a channel-ID token. The redirector can send its reply back at any time, tagged with the same ID token, and Squid does not get confused. This saves the memory and CPU cost of running many helper processes, when a few multi-threaded ones can be used instead. Even without multi-threading it raises the number of requests each helper can service, saving memory and reducing user annoyance.
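The channel-ID exchange can be sketched as a tiny Java helper (the "/old-path/" rewrite rule is purely illustrative, and the exact input fields and reply format should be checked against the Squid release in use):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ConcurrentRedirector {

    // Handle one helper request line: "channel-ID URL client-ip/fqdn user method".
    // Returns "channel-ID rewritten-url", or the channel-ID alone
    // (the blank reply meaning "leave the URL unchanged").
    static String handle(String line) {
        String[] parts = line.split(" ", 3);
        String channel = parts[0];
        String url = parts.length > 1 ? parts[1] : "";
        if (url.contains("/old-path/")) {
            return channel + " " + url.replace("/old-path/", "/new-path/");
        }
        return channel; // blank reply: no rewrite
    }

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            // Replies may be written in any order (e.g. from worker threads);
            // the channel-ID keeps each reply matched to its request.
            System.out.println(handle(line));
            System.out.flush();
        }
    }
}
```

Because each reply carries its channel-ID, worker threads can print replies as they finish, in any order, without confusing Squid; only the println itself needs to be synchronized so two replies do not interleave on one line.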

6- Not sure about this one, as it's a Windows server: "refresh_pattern -i
(/cgi-bin/|\?)   0   0%  0"

Windows has nothing to do with refresh_pattern. This matches a pattern in the URL indicating that a dynamic script is handling it. You can add \.asp|\.php|\.chm etc. there too, if you know a script is not generating correct cache-control headers and is also not using '?' in the URI.
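For example, if ASP scripts on the backend are known to emit no cache-control headers, a line like this (placed above the catch-all '.' pattern) keeps those URLs from being treated as fresh:

```
# dynamic content without proper cache-controls: never treat as fresh
refresh_pattern -i \.(asp|php|chm)$ 0 0% 0
```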

7- Finally, I wasn't able to get Squid to work without
'sslflags=DONT_VERIFY_PEER'. What is the impact?

Without it, Squid attempts to validate the peer SSL certificate against the root CAs that Squid (via the OpenSSL library) trusts. Squid is not able to present a certificate popup like browsers do, so it is forced to abort the connection setup whenever validation fails.

If you have self-signed the backend server certificate you can use that flag, or you may be able to import your self-created root CA into the set trusted by Squid. See the OpenSSL library documentation for how to do that; it may or may not be possible, and I have never tried it myself.
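If importing the CA works out, one hedged alternative is to point the cache_peer line at the CA file with sslcafile= instead of disabling verification (the .pem path here is hypothetical):

```
# verify the peer certificate against our own root CA
# instead of skipping verification entirely
cache_peer 192.168.1.2 parent 443 0 no-query no-digest originserver ssl sslcafile=C:/Interceptor/cert/myrootca.pem name=bajsite
```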



Once again, thanks for your help. You have no idea how much this is
helping me :)

You are welcome.


Best Regards,
Alaa Murad

----------------------------------------------------------------------------------------

https_port 443 cert=C:/Interceptor/cert/baj.cert key=C:/Interceptor/cert/baj.key defaultsite=www.eservices.mycompany.com
cache_peer 192.168.1.2 parent 443 0 no-query no-digest originserver ssl sslflags=DONT_VERIFY_PEER name=bajsite
redirect_children 25
redirect_rewrites_host_header on
redirect_program C:/java/bin/java.exe -Djava.util.logging.config.file=C:/Interceptor/redirector/RedirectorLogging.properties -jar C:/Interceptor/redirector/Redirector.jar
logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log c:\squid\var\logs\access.log common
cache_log c:\squid\var\logs\cache.log
cache_store_log none
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1
acl to_localhost dst 127.0.0.0/32
acl SSL_ports port 443          # https
acl SSL_ports port 563          # snews
acl SSL_ports port 873          # rsync
acl Safe_ports port 80          # http
acl Safe_ports port 81          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl Safe_ports port 631         # cups
acl Safe_ports port 873         # rsync
acl Safe_ports port 901         # SWAT
acl Safe_ports port 8080
acl CONNECT method CONNECT
http_access allow to_localhost

to_localhost is designed to catch attacking requests (requests designed to loop inside Squid until all TCP ports are consumed). It should be used in a deny line.

Yours is also missing the "0.0.0.0/32" IP pattern.
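Putting both corrections together, the to_localhost lines would read (a sketch):

```
# catch requests trying to loop back into Squid itself, and deny them
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
http_access deny to_localhost
```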

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow all
http_reply_access allow all
coredump_dir /var/spool/squid
forwarded_for on
redirect_rewrites_host_header on
buffered_logs on
never_direct allow all
cache deny all


Amos

