Problem using ASP website (sast.gov.in)

Dear All,

We are running Squid 3.3.8, compiled from source, as a transparent proxy.

We are facing a very peculiar issue that has become extremely painful
to identify and resolve. The website sast.gov.in is accessible and
usable without the proxy. Only when we introduce the proxy into the
network do about 10-20% of the computers have problems using the
website.

There is no pattern to this, and it is not always the same computers
that are affected, which makes it difficult to pinpoint where the
problem lies. The issue is completely intermittent. Other computers on
the network are able to access and use the website without any
problems.

The computers experiencing this problem are still able to access and
use other websites on the Internet without issue.

By "using the website" I mean that the affected computers can access
and load the site successfully; however, the login then times out, or
the submit/login button simply freezes.

Below is the squid.conf for your reference,

httpd_suppress_version_string on
via off
forwarded_for delete
acl lan src 192.168.1.0/24
http_access allow lan
acl SSL_ports port 443
acl Safe_ports port 80        # http
acl Safe_ports port 21        # ftp
acl Safe_ports port 443        # https
acl Safe_ports port 70        # gopher
acl Safe_ports port 210        # wais
acl Safe_ports port 1025-65535    # unregistered ports
acl Safe_ports port 280        # http-mgmt
acl Safe_ports port 488        # gss-http
acl Safe_ports port 591        # filemaker
acl Safe_ports port 777        # multiling http
acl CONNECT method CONNECT
strip_query_terms off
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
http_port 3128 transparent
cache_dir ufs /usr/local/squid/var/spool 7000 16 256
coredump_dir /usr/local/squid/var/coredumps
refresh_pattern ^ftp:             1440    20%    10080
refresh_pattern ^gopher:          1440     0%     1440
refresh_pattern -i (/cgi-bin/|\?)    0     0%        0
refresh_pattern .                    0    20%     4320
#url_rewrite_program /usr/local/bin/squidGuard

There are no errors in access.log or cache.log. However, on the
computers where the login times out or freezes, I have noticed in
access.log that there is only a single entry for the POST, and the
subsequent GET never shows up at all. I suppose that is to be
expected, since the login did not complete and the next page never
loaded.
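To quantify how widespread this POST-without-follow-up pattern is, a
small script could scan access.log for clients whose last logged
request to the site was a POST. This is only a rough sketch: it
assumes Squid's default native access.log format (timestamp, elapsed
time, client IP, status, bytes, method, URL, ...), and the sample
paths are made up for illustration.

```python
#!/usr/bin/env python3
# Sketch: list clients whose most recent request to the site was a POST
# that was never followed by another request (e.g. the post-login GET).
# Assumes Squid's default native access.log field order:
#   timestamp elapsed client code/status bytes method URL ...

import sys

def find_stalled_posts(lines, site="sast.gov.in"):
    """Return sorted client IPs whose last request to `site` was a POST."""
    last_method = {}
    for line in lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed or truncated lines
        client, method, url = fields[2], fields[5], fields[6]
        if site in url:
            last_method[client] = method
    return sorted(c for c, m in last_method.items() if m == "POST")

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "access.log"
    with open(path) as f:
        for client in find_stalled_posts(f):
            print(client)
```

Running this periodically against access.log would at least tell you
which clients are affected at any given moment, and whether the
affected set really does change over time.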

Could you point me to how I could debug this at a more granular level?
Even if we assume the problem lies with the website (sast.gov.in)
itself and not with Squid, I would like to gather useful information
that I can pass to the site's webmasters in order to get this problem
solved.
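For what it's worth, one way I understand more granular detail can be
obtained from Squid itself is by temporarily raising the debug level
for the HTTP section in cache.log. A sketch of the additions, assuming
the extra log volume is tolerable for a short diagnostic window:

```
# Temporary debugging additions to squid.conf (remove after diagnosis).
# ALL,1 keeps default verbosity for everything else; 11,2 raises
# debug section 11 (HTTP) to level 2, which logs full request and
# response headers to cache.log.
debug_options ALL,1 11,2
```

Capturing the headers of a failing POST this way, alongside a packet
capture taken on the proxy box during a failure, would presumably be
the most concrete evidence to hand to the site's webmasters.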

On a side note, we have the exact same proxy version and configuration
running at another location (different ISP). Users in that setup also
constantly access and use the same website (sast.gov.in), yet we have
not received or seen a single complaint so far. We have checked for
any kind of network/Internet anomalies and found none. Baffling!

Any inputs will be much appreciated.



