Re: Facebook page very slow to respond

Wilson Hernandez
849-214-8030
www.figureo56.com
www.optimumwireless.com


On 10/10/2011 9:54 PM, Wilson Hernandez wrote:
Amos.

Made the changes you suggested in this post.


On 10/8/2011 11:24 PM, Amos Jeffries wrote:
On 09/10/11 09:15, Wilson Hernandez wrote:
> I disabled squid and I'm doing simple FORWARDING and things work; this
> tells me that I'm having a configuration issue with squid 3.1.14.
>
> Now, I can't afford to run our network without squid, since we are also
> running SquidGuard to block some websites for certain users.
>
> Here's part of my squid.conf:
>
> # Port Squid listens on
> http_port 172.16.0.1:3128 intercept disable-pmtu-discovery=off
>
> error_default_language es-do
>
> # Access-lists (ACLs) will permit or deny hosts to access the proxy
> acl lan-access src 172.16.0.0/16
> acl localhost src 127.0.0.1
> acl localnet src 172.16.0.0/16
> acl proxy src 172.16.0.1
> acl clientes_registrados src "/etc/msd/ipAllowed"
>
> # acl adstoblock dstdomain "/etc/squid/blockAds"
>
> acl CONNECT method CONNECT
>
<snip>
>
> http_access allow proxy
> http_access allow localhost
>
> #---- Block some sites
>
> acl blockanalysis01 dstdomain .scorecardresearch.com clkads.com
> acl blockads01 dstdomain .rad.msn.com ads1.msn.com ads2.msn.com
> ads3.msn.com ads4.msn.com
> acl blockads02 dstdomain .adserver.yahoo.com ad.yieldmanager.com
> acl blockads03 dstdomain .doubleclick.net .fastclick.net
> acl blockads04 dstdomain .ero-advertising.com .adsomega.com
> acl blockads05 dstdomain .adyieldmanager.com .yieldmanager.com
> .adyieldmanager.net .yieldmanager.net
> acl blockads06 dstdomain .e-planning.net .super-publicidad.com
> .super-publicidad.net
> acl blockads07 dstdomain .adbrite.com .contextweb.com .adbasket.net
> .clicktale.net
> acl blockads08 dstdomain .adserver.com .adv-adserver.com
> .zerobypass.info .zerobypass.com
> acl blockads09 dstdomain .ads.ak.facebook.com .pubmatic.com .baynote.net
> .publicbt.com

Optimization tip:
These ACLs are all the same type as far as Squid is concerned, and you are using them the same way at the same point below. So the best thing to do is drop those 01,02,03 suffixes and put all the blocked domains under one ACL name.

Then the testing below can be reduced to a single line:
   http_access deny blockads


Changed all these to:

acl blockads dstdomain .rad.msn.com ads1.msn.com ads2.msn.com ads3.msn.com ads4.msn.com
acl blockads dstdomain .adserver.yahoo.com
acl blockads dstdomain .doubleclick.net .fastclick.net
acl blockads dstdomain .ero-advertising.com .adsomega.com
acl blockads dstdomain .adyieldmanager.com .yieldmanager.com .adyieldmanager.net .yieldmanager.net
acl blockads dstdomain .e-planning.net .super-publicidad.com .super-publicidad.net
acl blockads dstdomain .adbrite.com .contextweb.com .adbasket.net .clicktale.net
acl blockads dstdomain .adserver.com .adv-adserver.com .zerobypass.info .zerobypass.com
acl blockads dstdomain .ads.ak.facebook.com .pubmatic.com .baynote.net .publicbt.com

http_access deny blockads
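
(The original config also carries a commented-out "/etc/squid/blockAds" file ACL. As a sketch reusing that existing path, the whole list could live in that file instead, one domain per line, which keeps squid.conf short:

  acl blockads dstdomain "/etc/squid/blockAds"
  http_access deny blockads

where /etc/squid/blockAds would contain entries like ".doubleclick.net", one per line.)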




> balance_on_multiple_ip on

This erases some of the benefits of connection persistence and reuse. It is not as good an idea on 3.1+ as it was with earlier Squid.

Although you turned off connection persistence anyway below, so this is only noticeable when it breaks websites that depend on IP-based security.

Removed this line as suggested later...
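
(If the 3.1 default is off, as I read it, removing the line should be equivalent to stating the default explicitly:

  balance_on_multiple_ip off
)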



>
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
>

You may as well erase all the refresh_pattern rules below these. refresh_pattern is a first-match list, and the CGI and '.' pattern rules are the last ones Squid processes; anything listed after the '.' catch-all can never be reached.
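
For example (hypothetical rule), anything placed after the catch-all is dead, since '.' matches every URL first:

  refresh_pattern .               0       20%     4320
  # the next rule can never match because '.' above catches everything:
  refresh_pattern -i \.jpg$       1440    50%     10080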

Also deleted all those rules and left what's above...
> visible_hostname www.optimumwireless.com
> cache_mgr optimumwireless@xxxxxxxxxxx
>

Optimum wireless. Hmm. I'm sure I've audited this config before and mentioned the same things...


You probably have..

> # TAG: store_dir_select_algorithm
> # Set this to 'round-robin' as an alternative.
> #
> #Default:
> # store_dir_select_algorithm least-load
> store_dir_select_algorithm round-robin
>
Changed this to least-load... Don't know if it is better or not...


Interesting. Forcing round-robin selection between one dir. :)
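
(With a single cache_dir there is nothing to select between, so either value behaves the same. A sketch, using a placeholder path since the actual cache_dir line is not shown in this thread:

  cache_dir ufs /usr/local/squid/var/cache 10000 16 256
  # store_dir_select_algorithm only matters with two or more cache_dir lines:
  store_dir_select_algorithm least-load
)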

>
>
> # PERSISTENT CONNECTION HANDLING
> # -----------------------------------------------------------------------------
>
> #
> # Also see "pconn_timeout" in the TIMEOUTS section
>
> # TAG: client_persistent_connections
> # TAG: server_persistent_connections
> # Persistent connection support for clients and servers. By
> # default, Squid uses persistent connections (when allowed)
> # with its clients and servers. You can use these options to
> # disable persistent connections with clients and/or servers.
> #
> #Default:
> client_persistent_connections off
> server_persistent_connections off
> # TAG: persistent_connection_after_error
> # With this directive the use of persistent connections after
> # HTTP errors can be disabled. Useful if you have clients
> # who fail to handle errors on persistent connections properly.
> #
> #Default:
> persistent_connection_after_error off
>

<snip settings left around their default>

>
> # TAG: pipeline_prefetch
> # To boost the performance of pipelined requests to closer
> # match that of a non-proxied environment Squid can try to fetch
> # up to two requests in parallel from a pipeline.
> #
> # Defaults to off for bandwidth management and access logging
> # reasons.
> #
> #Default:
> pipeline_prefetch on

Pipelining ON with persistent connections OFF earlier. This could be the whole problem all by itself.

What is happening is that Squid accepts 2 requests from the client (pipeline on), parses them both, services the first one from a random DNS IP (balance_on_multiple_ip on) and *closes* the connection (persistence off). The client is forced to repeat the TCP connection and the second request from scratch, likely pipelining another behind that.

This doubles the load on Squid's parser (one of the slowest, most CPU-intensive parts of proxying), as well as potentially doubling the client->squid request traffic.


I recommend you remove balance_on_multiple_ip and server_persistent_connections from your config. That will enable server persistence in 3.1 in accordance with its HTTP/1.1 capabilities.

As you suggested here...
I went with
pipeline_prefetch on
client_persistent_connections on
pconn_timeout 30 seconds


Also you can try this:
  pipeline_prefetch on
  client_persistent_connections on
  pconn_timeout 30 seconds

If that client-facing change causes resource problems, use:
  pipeline_prefetch off
  client_persistent_connections off

BUT, please be sure to make a note of why before turning off persistence. You will want to re-check that reason periodically. Persistence enables several major performance boosters in HTTP/1.1 (like pipelining), and the problems-vs-benefits balance changes over time with factors outside of Squid, such as client HTTP compliance and network hardware.

>
> http_access allow clientes_registrados

Um, I assume the clientes_registrados are registered IPs within the LAN network?


Exactly... These are registered IPs.

As a double-check on the http_access permissions, you can use squidclient to get a simple list of the access rules:
  squidclient mgr:config | grep http_access

and check that the rules in the output follow your required policies.
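
(Based on the directives quoted in this thread, the output should look roughly like this; order matters, since the first matching rule wins. The <snip> sections may add more lines:

  http_access allow proxy
  http_access allow localhost
  http_access deny blockads
  http_access allow clientes_registrados
  http_access deny all
)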

>
> shutdown_lifetime 45 seconds
>
> http_access deny all
>
>
> Wilson Hernandez
> www.figureo56.com
> www.optimumwireless.com


I did a /usr/local/squid/sbin/squid -k reconfigure

And noticed I couldn't access any pages....
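
(Next time I will check the file for syntax errors before reloading, assuming the same install prefix as above:

  /usr/local/squid/sbin/squid -k parse
)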

I shut down squid and now it is in the process of rebuilding its store...

I will test this config and see how it works... Will let you know how it goes.

Thanks for taking the time to help me resolve this problem.

Wilson

Problem still happening... Damn, Facebook is still slow.

