To answer your query:
* those 'xx' are different numbers, or
* the line was logged by another Squid process (with different config), or
* the config file you think is being used actually is not.
<<NILESH>> 'xx' is just used to hide the IP subnet in the mail. Also, the IP is the same; it didn't change.
=======================================================
I notice that this config tells your Squid to listen on port 8080 and
pass all its traffic through a peer at 10.xx.xx.108 which also listens
on port 8080.
Is that log being produced by that other peer?
<<NILESH>> we have the proxy setup as: EndUser PC >> Linux Proxy >> Windows Proxy >> Internet gateway.
The logs I captured are from the Linux proxy server.
==============================================
Is there anything, any non-# lines at all, in your config besides what
your first post contained? Even if you don't think it's relevant.
<<NILESH>> here is the complete squid.conf for your reference:
#
# Recommended minimum configuration:
#### AD SSO Integration #####
#auth_param negotiate program /usr/lib64/squid/squid_kerb_auth -d -s GSS_C_NO_NAME
auth_param negotiate program /usr/lib64/squid/squid_kerb_auth -s HTTP/proxy02.CUST.IN@xxxxxxx
auth_param negotiate children 20
auth_param negotiate keep_alive on
acl ad_auth proxy_auth REQUIRED
#### AD Group membership ####
external_acl_type AD_Group ttl=300 negative_ttl=0 children=10 %LOGIN /usr/lib64/squid/squid_ldap_group -P -R -b "DC=CUST,DC=IN" -D svcproxy -W /etc/squid/pswd/pswd -f "(&(objectclass=person)(userPrincipalName=%v)(memberof=cn=%a,ou=InternetAccess,ou=Groups,dc=cust,dc=in))" -h CUST.IN -s sub -v 3
acl AVWSUS external AD_Group lgOnlineUpdate
acl windowsupdate dstdomain "/etc/squid/sitelist/infra_update_site"
acl custUSER external AD_Group lgInternetAccess_custUsers
acl custallowedsite dstdomain "/etc/squid/sitelist/cust_allowed_site"
#acl SHAVLIK external AD_Group lgShavlikUpdate
acl shavlikupdate dstdomain "/etc/squid/sitelist/shavlik_update_site"
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl AVSRVR src 10.50.2.107 # Cloud SEPM Server
acl SHAVLIK_SRVR src 10.50.2.112 # Shavlik Server (NTLM-only access)
acl IWCCP01 src 10.55.15.103 # Application access to Worldpay/bottomline Payment test site.
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
#
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
#http_access allow test shavlikupdate
http_access allow SHAVLIK_SRVR shavlikupdate
http_access allow AVSRVR windowsupdate
http_access allow AVWSUS windowsupdate
http_access allow IWCCP01
#http_access allow IWCCP01 custallowedsite
http_access allow custUSER custallowedsite
http_access allow ad_auth
# And finally deny all other access to this proxy
http_access deny all
# Squid normally listens to port 3128
http_port 8080
never_direct allow all
cache_peer 10.50.2.108 parent 8080 0 default
dns_nameservers 10.50.2.108
# We recommend you to use at least the following line.
#hierarchy_stoplist cgi-bin ?
# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 10240 16 256
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid
# Log forwarding to SysLog
#access_log syslog:local1.info ####Sachin P.####
#access_log syslog:local1.info squid
access_log /var/log/squid/access.log
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
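(For what it's worth, the squid_ldap_group helper configured above can be exercised by hand to verify the group lookup independently of Squid; it reads "user group" pairs on stdin and answers OK or ERR. A sketch reusing the options from the config — the username shown is a placeholder, not taken from the real environment:)

```
echo 'someuser@CUST.IN lgInternetAccess_custUsers' | \
  /usr/lib64/squid/squid_ldap_group -P -R -b "DC=CUST,DC=IN" \
    -D svcproxy -W /etc/squid/pswd/pswd \
    -f "(&(objectclass=person)(userPrincipalName=%v)(memberof=cn=%a,ou=InternetAccess,ou=Groups,dc=cust,dc=in))" \
    -h CUST.IN -s sub -v 3
# prints OK when the user is in the group, ERR otherwise
```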
=============================================================
Date: Thu, 6 Oct 2016 02:23:23 +1300
From: Amos Jeffries <squid3@xxxxxxxxxxxxx>
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: Squid - AD kerberos auth and Linux Server
proxy access not working
Message-ID: <cc644993-3880-ece2-1369-942daa9b03c6@xxxxxxxxxxxxx>
Content-Type: text/plain; charset=utf-8
On 5/10/2016 7:00 a.m., Nilesh Gavali wrote:
> Hi Amos;
> OK, we can discuss the issue in two parts: 1. Windows AD
> Authentication & SSO, and 2. the Linux server unable to access via the squid proxy.
>
> For First point-
> The requirement is to have SSO for accessing the internet via the squid proxy
> and, based on the user's AD group membership, to allow access to specific sites
> only. I believe the current configuration of squid is working as expected.
>
> For Second point -
> The point I would like to highlight here is that the Linux server IWCCP01 is not
> part of the domain at all, hence the error below, as Squid is configured for
> AD auth. So how can we allow a Linux server or non-domain machine to access
> specific sites?
>
>> Error 407 is "proxy auth required", so the proxy is expecting
> authentication
>> for some reason.
> ====================================
> > Can you confirm that the hostname vseries-test.bottomline.com is
> contained in
>> your site file /etc/squid/sitelist/dbs_allowed_site ?
>
> YES, we have the entry .bottomline.com, which works fine when accessed via a
> Windows machine with the proxy enabled for that user.
> ==============================
>> Can you temporarily change the line "http_access allow IWCCP01
> allowedsite" to
>> "http_access allow IWCCP01" and see whether the machine then gets
> access?
>
> I made the change as suggested, but it still gives the same Error 407.
Meaning that it is that ACL (IWCCP01) which is broken.
> ========================================
> If that works, please list the output of the command:
> grep "bottomline.com" /etc/squid/sitelist/dbs_allowed_site
>
> The output of the above command is below:
>
> [root@Proxy02 ~]# grep "bottomline.com"
> /etc/squid/sitelist/dbs_allowed_site
> .bottomline.com
> [root@Proxy02 ~]#
Okay great. Your allowedsite has a correct entry to match the test request.
Since IWCCP01 contains exactly one IP address for the server
> acl IWCCP01 src 10.xx.15.103
it means your server is not using that IP address when it contacts Squid.
BUT that IP is what got logged as the client/src IP.
> 1475518342.279 0 10.xx.15.103 TCP_DENIED/407 3589 CONNECT
vseries-test.bottomline.com:443 - NONE/- text/html
Strange. Unless:
* those 'xx' are different numbers, or
* the line was logged by another Squid process (with different config), or
* the config file you think is being used actually is not.
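If the ACL itself needs tracing, one low-risk check (a sketch; debug section 28 covers ACL processing) is to raise its logging detail temporarily in squid.conf:

```
# temporary: keep general logging quiet, but show ACL checks
# (debug section 28) in cache.log
debug_options ALL,1 28,3
```

Then `squid -k reconfigure` and watch cache.log while the Linux server retries, to see which ACL actually denies the request.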
I notice that this config tells your Squid to listen on port 8080 and
pass all its traffic through a peer at 10.xx.xx.108 which also listens
on port 8080.
Is that log being produced by that other peer?
Is there anything, any non-# lines at all, in your config besides what
your first post contained? Even if you don't think it's relevant.
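To rule out the wrong-config-file possibility, a quick command sketch (paths assumed from the distro defaults, so adjust as needed):

```
squid -v | grep sysconfdir          # compiled-in default squid.conf location
ps -ef | grep '[s]quid'             # an explicit -f option means another file is in use
squid -k parse -f /etc/squid/squid.conf   # parse-check the file you have been editing
```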
Amos
------------------------------
Message: 4
Date: Thu, 6 Oct 2016 02:45:49 +1300
From: Amos Jeffries <squid3@xxxxxxxxxxxxx>
To: Hardik Dangar <hardikdangar+squid@xxxxxxxxx>
Cc: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: Caching http google deb files
Message-ID: <89f9840b-7ec7-2f0b-a81c-5376c344878e@xxxxxxxxxxxxx>
Content-Type: text/plain; charset=utf-8
On 5/10/2016 11:27 p.m., Hardik Dangar wrote:
> Hey Amos,
>
> I have implemented your patch at
>
> and added the following to my squid.conf:
> archive_mode allow all
>
> and my refresh pattern is,
> refresh_pattern dl-ssl.google.com/.*\.(deb|zip|tar|rpm) 129600 100% 129600
> ignore-reload ignore-no-store override-expire override-lastmod ignor$
>
> but I am still not able to cache it. Can you tell from the below output what
> the problem would be? Do I need to configure anything extra?
Sorry. I was a bit tired when I wrote earlier and steered you wrong.
The archive patch will only help for things which can be cached in the
first place. Vary:* is not part of that set, so this won't help you at all.
That leaves you with the option of using a multi-level cache hierarchy
where the frontend cache removes the header (causing the backend cache /
client to try and store it).
Or removing all of Squid's Vary header support. I really don't recommend
either approach.
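For concreteness only (and keeping the caveat above that it is not recommended), the multi-level idea might look roughly like this, assuming a separate parent instance facing the origin and a child instance facing clients; ports and addresses are placeholders:

```
# child (client-facing) instance: forward everything to the parent
cache_peer 127.0.0.1 parent 3129 0 no-query default
never_direct allow all

# parent (origin-facing) instance, listening on 3129: strip Vary from
# the replies it hands down, so the child may store them
http_port 3129
reply_header_access Vary deny all
```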
Amos
------------------------------
Message: 5
Date: Thu, 6 Oct 2016 03:05:59 +1300
From: Amos Jeffries <squid3@xxxxxxxxxxxxx>
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: Multiple auth schemes in a single Squid
instance
Message-ID: <8fa245d1-ed36-421c-7bb9-f95975d14d44@xxxxxxxxxxxxx>
Content-Type: text/plain; charset=utf-8
On 6/10/2016 12:09 a.m., john jacob wrote:
> Hi All,
>
> We have a requirement to use the same Squid instance for Basic and NTLM
> authentication to serve various customer groups (may not be on different
> network sections). The customer groups which are using Basic authentication
> (for legacy reasons) should not receive NTLM scheme and the customer groups
> which use NTLM should not receive Basic scheme.
You seem to be implying that Basic auth is somehow worse than NTLM. In
fact NTLM is the less secure of the two, by a thin margin. Both are almost
equally bad to use any time in the past decade.
You should really be considering both those to be nasty legacy and
moving on to Negotiate/Kerberos as much as possible.
> I couldn't find a way to
> implement this using the existing Squid 4.x config options. So I am
> thinking of introducing a new config parameter called "endpoints" like
> below.
>
> auth_param basic endpoints ipofBasic portofBasic # Default is "endpoints
> all"
>
> auth_param ntlm endpoints ipofNTLM portofNTLM # Default is "endpoints all"
>
> acl ipofBasic localip 192.168.4.2
> acl portofBasic localport 3129 3139
>
> acl ipofNTLM localip 192.168.4.2
> acl portofNTLM localport 3149 3159
>
>
> The idea is, if Squid receives a request on an endpoint on which only basic
> authentication is needed (i.e. 192.168.4.2:3129 and 192.168.4.2:3139), NTLM
> will not be presented to the client/browser. Vice versa for NTLM. If no
> endpoints are configured, then the existing behavior will be applied.
>
> Do you think this is reasonable, and are there any obvious problems with
> this? If you find this useful, I am happy to contribute back when I finish
> implementing this module (I haven't yet started developing).
The HTTP framework is negotiated thusly:
the proxy offers what it supports,
the client tries the most secure credential type it has access to,
the proxy says whether that is acceptable or to try again.
.. repeat as necessary until either a success or no more credentials
are known - in which case ask the user with popup(s).
When that framework is used properly the clients with NTLM will try that
and the ones without will try Basic.
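Sketching that handshake (scheme order and token are illustrative, not taken from a real capture):

```
C: GET http://example.com/ HTTP/1.1           (no credentials yet)
S: HTTP/1.1 407 Proxy Authentication Required
   Proxy-Authenticate: Negotiate
   Proxy-Authenticate: NTLM
   Proxy-Authenticate: Basic realm="proxy"
C: GET http://example.com/ HTTP/1.1
   Proxy-Authorization: NTLM TlRMTVNTUAAB...   (best scheme this client supports)
S: 407 again if rejected, otherwise the response; repeat until success
   or the client runs out of credentials and asks the user with a popup
```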
Squid-3.5 and later have the "auth_param ... key_extras ..." option that
can take extra parameters for the auth helper to use when it decides if
the credentials are valid.
I suggest you try making yourself a script that takes the client IP as
one of those extra parameters, returning ERR if the IP is not allowed to
use that type of auth, or relaying the lookup on to your real auth helper
if it is allowed.
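A minimal sketch of such a wrapper, assuming `auth_param basic key_extras "%>a"` so each helper input line ends with the client IP; the allowed subnet and the relay point are placeholders, and the real credential check is stubbed out:

```shell
#!/bin/sh
# gate: reads "user password clientIP" lines on stdin (the IP appended
# by key_extras), answers ERR for clients outside the allowed subnet,
# and would otherwise hand the credentials to the real auth helper.
gate() {
  while read -r line; do
    ip="${line##* }"               # last field: client IP from key_extras
    case "$ip" in
      192.168.4.*) echo "OK" ;;    # allowed: relay to the real helper here
      *)           echo "ERR" ;;   # wrong endpoint for this auth scheme
    esac
  done
}
```

Wired in with `auth_param basic program /path/to/gate.sh` (a hypothetical path); the helper stays resident and answers one line per request, as Squid expects.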
Amos
------------------------------
Message: 6
Date: Wed, 5 Oct 2016 08:19:52 -0600
From: Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx>
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: Bug: Missing MemObject::storeId
Message-ID:
<010b9e54-d853-6165-e00f-0266a3f71677@xxxxxxxxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=utf-8
On 10/05/2016 06:28 AM, amaury@xxxxxx wrote:
> I'm using squid-3.5.21-20160908-r14081 and, for the first time, I
> have an SMP configuration (4 workers and a rock cache_dir).
> 2016/10/05 14:12:55 kid4| Bug: Missing MemObject::storeId value
> Is it a misconfiguration?
It is a known bug: http://bugs.squid-cache.org/show_bug.cgi?id=4527
I recommend updating that bug report with your configuration details,
such as the fact that you are not using ICP (AFAICT). The bug also has
some suggestions for triaging this problem further.
The existence of that bug does not imply that your configuration is
correct, but this bug is not a [known] sign of a misconfiguration.
Alex.
------------------------------
Message: 7
Date: Wed, 5 Oct 2016 20:03:47 +0530
From: Hardik Dangar <hardikdangar+squid@xxxxxxxxx>
To: Amos Jeffries <squid3@xxxxxxxxxxxxx>
Cc: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: Caching http google deb files
Message-ID:
<CA+sSnVYCy5E00jKK5cPZm3q+eBX8Fx=Mjs_iu4Xs0oebxcte9Q@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset="utf-8"
Hey Amos,
Oh, I actually built the archive-mode Squid with help from here:
http://bugs.squid-cache.org/show_bug.cgi?id=4604
I was thinking: what if we had an option vary_mode, just like archive_mode, to
set it for a particular domain, like:
acl dlsslgoogle srcdomain dl-ssl.google.com
vary_mode allow dlsslgoogle
The above could work in one of the following ways:
1) We replace the Vary header for the srcdomain with some suitable option so the
request can be cached.
2) It removes the Vary header totally for the above domain.
3) It uses the matching Squid refresh_pattern for the srcdomain and only
caches requests for the particular type of file given in the refresh_pattern.
What do you think would be easier? And how do I work on the Squid source to do
the above? Any hint is appreciated.
One more thing: can you tell me, if we are already violating HTTP via options
like no-cache, ignore-no-store, ignore-private, and ignore-reload, why can't we
do the same for the Vary header?
It seems the servers that are notorious have a 'Vary: *' header, as well as, at
times (GitHub), no Last-Modified header, and these are the biggest bandwidth eaters.
Thanks.
Hardik
------------------------------
Subject: Digest Footer
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
------------------------------
End of squid-users Digest, Vol 26, Issue 22
*******************************************