
Re: Squid - AD kerberos auth and Linux Server proxy access not working


Hi Amos,
Thanks for the clarification; it is working as expected now. Appreciate your support.

Thanks again.

Thanks & Regards
Nilesh Suresh Gavali


-----Forwarded by Nilesh Gavali/MUM/TCS on 10/07/2016 04:52PM -----
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
From: squid-users-request@xxxxxxxxxxxxxxxxxxxxx
Sent by: "squid-users"
Date: 10/06/2016 12:44AM
Subject: squid-users Digest, Vol 26, Issue 25

Send squid-users mailing list submissions to
squid-users@xxxxxxxxxxxxxxxxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.squid-cache.org/listinfo/squid-users
or, via email, send a message with subject or body 'help' to
squid-users-request@xxxxxxxxxxxxxxxxxxxxx

You can reach the person managing the list at
squid-users-owner@xxxxxxxxxxxxxxxxxxxxx

When replying, please edit your Subject line so it is more specific
than "Re: Contents of squid-users digest..."


Today's Topics:

   1. Re: Squid - AD kerberos auth and Linux Server proxy access
      not working (Amos Jeffries)
   2. Re: Caching http google deb files (Hardik Dangar)
   3. Re: intercept + IPv6 + IPFilter 5.1 (Egerváry Gergely)
   4. Re: Caching http google deb files (Antony Stone)


----------------------------------------------------------------------

Message: 1
Date: Thu, 6 Oct 2016 06:03:09 +1300
From: Amos Jeffries <squid3@xxxxxxxxxxxxx>
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [squid-users] Squid - AD kerberos auth and Linux Server
proxy access not working
Message-ID: <4e608076-5ca4-cdac-a5e4-6d0af5106f1d@xxxxxxxxxxxxx>
Content-Type: text/plain; charset=utf-8

On 6/10/2016 5:31 a.m., Nilesh Gavali wrote:
> <<NILESH>> here is the complete squid.conf for your reference -
>
> #
> # Recommended minimum configuration:
> ####  AD SSO Integration  #####
> #auth_param negotiate program /usr/lib64/squid/squid_kerb_auth -d -s GSS_C_NO_NAME
> auth_param negotiate program /usr/lib64/squid/squid_kerb_auth -s HTTP/proxy02.CUST.IN@xxxxxxx
> auth_param negotiate children 20
> auth_param negotiate keep_alive on
>
> acl ad_auth proxy_auth REQUIRED
>
> ####  AD Group membership  ####
>
>
> external_acl_type AD_Group ttl=300 negative_ttl=0 children=10 %LOGIN /usr/lib64/squid/squid_ldap_group -P -R -b "DC=CUST,DC=IN" -D svcproxy -W /etc/squid/pswd/pswd -f "(&(objectclass=person)(userPrincipalName=%v)(memberof=cn=%a,ou=InternetAccess,ou=Groups,dc=cust,dc=in))" -h CUST.IN -s sub -v 3
>
> acl AVWSUS external AD_Group lgOnlineUpdate
> acl windowsupdate dstdomain "/etc/squid/sitelist/infra_update_site"
>
> acl custUSER external AD_Group lgInternetAccess_custUsers
> acl custallowedsite dstdomain "/etc/squid/sitelist/cust_allowed_site"
>
> #acl SHAVLIK external AD_Group lgShavlikUpdate
> acl shavlikupdate dstdomain "/etc/squid/sitelist/shavlik_update_site"
>
<snip defaults>

> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl AVSRVR src 10.50.2.107      # Cloud SEPM Server
> acl SHAVLIK_SRVR src 10.50.2.112     # Shavlik Server (NTLM-only access)
> acl IWCCP01 src 10.55.15.103   # Application access to Worldpay/Bottomline payment test site

<snip defaults>
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> #
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
>
> #http_access allow test shavlikupdate
> http_access allow SHAVLIK_SRVR shavlikupdate
> http_access allow AVSRVR windowsupdate

The "AVWSUS" ACL below requires authentication in order to check the
group membership. That is what triggers the 407 response to happen.

Move the IWCCP01 line up to here and it should stop.

To make it clearer which lines of your configuration need auth and which
do not, you could place the following line right here:

 http_access deny !ad_auth

Everything that needs auth or group lookups should always go below it;
everything that must avoid auth should always go above it.
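
For example, the resulting order would look roughly like this (a sketch
using the ACL names from your own config):

 http_access allow SHAVLIK_SRVR shavlikupdate
 http_access allow AVSRVR windowsupdate
 http_access allow IWCCP01
 http_access deny !ad_auth
 http_access allow AVWSUS windowsupdate
 http_access allow custUSER custallowedsite
 http_access allow ad_auth
 http_access deny all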


> http_access allow AVWSUS windowsupdate
> http_access allow IWCCP01
> #http_access allow IWCCP01 custallowedsite
> http_access allow custUSER custallowedsite
> http_access allow ad_auth
> # And finally deny all other access to this proxy
> http_access deny all
>

Amos



------------------------------

Message: 2
Date: Thu, 6 Oct 2016 00:10:46 +0530
From: Hardik Dangar <hardikdangar+squid@xxxxxxxxx>
To: Jok Thuau <jok@xxxxxxxxxx>
Cc: Squid Users <squid-users@xxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Caching http google deb files
Message-ID:
<CA+sSnVZ+6csWqt60nwwSM0QDSmx+DutqQeXgL-bGtYbFC6WRuw@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset="utf-8"

Hey Jok,

Thanks for the suggestion, but the big issue with that is that I have to
download the whole repository (about 80-120 GB) first, and then each week
I need to download another 20 to 25 GB. We hardly use any of that except
a few popular repos. The big issue I always have with most of these tools
is third-party repos. squid-deb-proxy is quite reliable, but again it is
just Squid with a custom config, nothing else, and it fails to cache
Google debs.

Squid is perfect for me because it can cache things the first time they
are requested, so the next time anybody requests them they are ready. The
problem arises when big companies like Google and GitHub do not want us
to cache their content and use various tricks to prevent it. My issue is
that the same Google deb files are downloaded 50 times in the same day as
apt updates happen, and I waste hundreds of GB on the same content. In
the country where I live bandwidth is a very costly matter and fast
connections are expensive, so this is important to me.
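
For reference, the usual squid.conf approach for .deb files is a
refresh_pattern along these lines (a sketch only; it does not help here,
because Squid still honours the "Vary: *" on Google's responses):

 refresh_pattern -i \.deb$ 129600 100% 129600 refresh-ims override-expire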

@Amos,

I think it's about time Squid's code was updated to handle difficult
caching cases like Google and GitHub. I am interested in writing a
proposal; I will soon share it on squid-dev, ask for ideas, and try to
get official approval so I can build this according to Squid's standards.

But before that, can you help me with a few things? Essentially, I don't
have much experience with C code, as I have worked most of my life on the
PHP, Python, and JavaScript side. I do know how to write C code, but I am
not an expert at it. So I want to know whether there is any pattern Squid
follows besides the OOP pattern. I also want to understand Squid's
workflow, i.e. what happens when it receives a request, how ACLs are
applied programmatically, and how refresh patterns are applied. Is there
a way I can debug and check whether refresh patterns are applied for a
given URL, and whether reply_header_replace has replaced a header? If I
can see those lines in the debug output, it will help me with this. I
know debug options can help, but if I turn them on at level 9 it is very
difficult to wade through so many debug entries.
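
For targeted debugging, a narrower debug_options line is the usual
approach; a sketch, assuming Squid's standard debug section numbers
(22 is refresh calculation, 28 is access control):

 debug_options ALL,1 22,5 28,5

That should surface refresh_pattern and ACL decisions without the
level-9 flood.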

My idea is to develop a module that does not change any of the Squid code
but is loaded only when it is called explicitly in the Squid config. So I
want to know whether there is any piece of code available within Squid
that behaves similarly, just like your archive mode.




On Wed, Oct 5, 2016 at 9:49 PM, Jok Thuau <jok@xxxxxxxxxx> wrote:

> This is sort of off-topic, but have you considered using a deb repo
> mirroring software?
> (it would mean that you need to update your clients to point to that
> rather than google, but that's not really difficult).
> software like aptly (aptly.info) are really good about this (though a
> little hard to get going in the first place). or a deb-caching proxy
> (apt-cacher-ng? squid-deb-proxy?)
>
>
> On Tue, Oct 4, 2016 at 7:30 AM, Hardik Dangar <
> hardikdangar+squid@xxxxxxxxx> wrote:
>
>> Wow, I hadn't thought of that. Google might want the tracking data;
>> that could be the reason they have blindly put a "Vary: *" header
>> there. Oh, the irony: the company which lectures all of us on how to
>> deliver content is doing such a thing.
>>
>> I have looked at your patch, but how do I enable it? Do I need to
>> write a custom ACL? I know I need to compile and reinstall after
>> applying the patch, but what exactly do I need to do in squid.conf?
>> Looking at your patch, I am guessing I need to write an "archive"
>> ACL, or I am too naive to understand the C code :)
>>
>> Also, is reply_header_replace any good for this?
>>
>>
>> On Tue, Oct 4, 2016 at 7:47 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx>
>> wrote:
>>
>>> On 5/10/2016 2:34 a.m., Hardik Dangar wrote:
>>> > Hey Amos,
>>> >
>>> > We have about 50 clients which download the same Google Chrome
>>> > update every 2 or 3 days; that means 2.4 GB. Although the response
>>> > says Vary, the requested file is the same, and all of it is
>>> > downloaded via apt update.
>>> >
>>> > Is there any option just like ignore-no-store? I know I am asking
>>> > for too much, but it seems very silly on Google's part that they
>>> > are sending a Vary header in a place where they shouldn't, as no
>>> > matter how you access those URLs you are only going to get those
>>> > deb files.
>>>
>>>
>>> Some things G does only make sense when you ignore all the PR about
>>> wanting to make the web more efficient and consider that it's a
>>> company whose income is derived from recording data about people's
>>> habits and activities. Caching can hide that info from them.
>>>
>>> >
>>> > Can I hack the Squid source code to ignore the Vary header?
>>> >
>>>
>>> Google are explicitly saying the response changes. I suspect there is
>>> something involving Google account data being embedded in some of the
>>> downloads. For tracking, etc.
>>>
>>>
>>> If you are wanting to test it I have added a patch to
>>> <http://bugs.squid-cache.org/show_bug.cgi?id=4604> that should implement
>>> archival of responses where the ACLs match. It is completely untested by
>>> me beyond building, so YMMV.
>>>
>>> Amos
>>>
>>>
>>
>> _______________________________________________
>> squid-users mailing list
>> squid-users@xxxxxxxxxxxxxxxxxxxxx
>> http://lists.squid-cache.org/listinfo/squid-users
>>
>>
>

------------------------------

Message: 3
Date: Wed, 5 Oct 2016 20:49:54 +0200
From: Egerváry Gergely <gergely@xxxxxxxxxxx>
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [squid-users] intercept + IPv6 + IPFilter 5.1
Message-ID: <57F54B52.4000803@xxxxxxxxxxx>
Content-Type: text/plain; charset=utf-8

>> Should "intercept" work with IPv6 on NetBSD 7-STABLE and IPFilter 5.1?

Okay, we have "fixed" Squid interception, and IPFilter in the kernel,
and now it's working well. But did we do it the right way?

While reading ip_nat.c in IPFilter, I found that SIOCGNATL - and its
function called ipf_nat_lookupredir() - is a frontend to two functions:
ipf_nat_inlookup() and ipf_nat_outlookup().

We are now calling SIOCGNATL so that it uses ipf_nat_outlookup(). But
shouldn't we call it so that it uses ipf_nat_inlookup() instead?

In Squid, we are working with 3 different addresses:
- source IP:port of the connection (browser client)
- real destination IP:port (the target web server)
- interception destination IP:port (Squid itself)

In IPFilter, the terminology is different: "real" refers to the
original source, not the original destination.

In my understanding, for redirect (RDR) rules, where we know the
original source address and the rewritten destination address, we should
use ipf_nat_inlookup() to get the original destination address.

ipf_nat_outlookup() should be used in source-NAT (MAP) scenarios,
which we don't need for Squid.

If that's true, IPFilter was correct - we have to revert our IPFilter
patches - and modify Intercept.cc instead.
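
For concreteness, here is an untested sketch of the userland side of the
lookup, roughly as Squid's Intercept.cc issues it against IPFilter 5.x.
Struct and field names are taken from ip_nat.h, and natfd is assumed to
be an open /dev/ipnat descriptor; setting IPN_IN in nl_flags is what
makes ipf_nat_lookupredir() take the ipf_nat_inlookup() path instead of
ipf_nat_outlookup().

#include <string.h>
#include <sys/ioctl.h>
#include "netinet/ip_compat.h"
#include "netinet/ip_fil.h"
#include "netinet/ip_nat.h"

static int
lookup_redir(int natfd, struct natlookup *nl)
{
    ipfobj_t obj;

    memset(&obj, 0, sizeof(obj));
    obj.ipfo_rev  = IPFILTER_VERSION;   /* header/ABI revision check */
    obj.ipfo_size = sizeof(*nl);
    obj.ipfo_ptr  = nl;
    obj.ipfo_type = IPFOBJ_NATLOOKUP;

    nl->nl_flags |= IPN_IN;             /* request the RDR (inbound) lookup */

    /* on success, the original destination is returned in
     * nl->nl_realip / nl->nl_realport */
    return ioctl(natfd, SIOCGNATL, &obj);
}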

See IPFilter source code comments below:

========
Function: ipf_nat_inlookup
Returns: nat_t* - NULL == no match, else pointer to matching NAT entry
Parameters:
fin(I) - pointer to packet information
flags(I) - NAT flags for this packet
p(I) - protocol for this packet
src(I) - source IP address
mapdst(I) - destination IP address

Lookup a nat entry based on the mapped destination ip address/port
and real source address/port. We use this lookup when receiving a
packet, we're looking for a table entry, based on the destination
address.

========
Function: ipf_nat_outlookup
Returns: nat_t* - NULL == no match, else pointer to matching NAT entry
Parameters:
fin(I) - pointer to packet information
flags(I) - NAT flags for this packet
p(I) - protocol for this packet
src(I) - source IP address
dst(I) - destination IP address
rw(I) - 1 == write lock on held, 0 == read lock.

Lookup a nat entry based on the source 'real' ip address/port
and destination address/port. We use this lookup when sending a packet
out, we're looking for a table entry, based on the source address.

========

See full ip_nat.c source code here:

http://cvsweb.netbsd.org/bsdweb.cgi/src/sys/external/bsd/ipf/netinet/ip_nat.c?rev=1.16&content-type=text/x-cvsweb-markup

Thank you,
--
Gergely EGERVARY



------------------------------

Message: 4
Date: Wed, 5 Oct 2016 21:13:21 +0200
From: Antony Stone <Antony.Stone@xxxxxxxxxxxxxxxxxxxx>
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: Caching http google deb files
Message-ID: <201610052113.21686.Antony.Stone@xxxxxxxxxxxxxxxxxxxx>
Content-Type: Text/Plain;  charset="iso-8859-15"

On Wednesday 05 October 2016 at 20:40:46, Hardik Dangar wrote:

> Hey Jok,
>
> Thanks for the suggestion, but the big issue with that is that I have
> to download the whole repository (about 80-120 GB) first, and then each
> week I need to download 20 to 25 GB.

This is not true for apt-cacher-ng.  You install it and it does nothing.  You
point your Debian (or Ubuntu, maybe other Debian-derived distros as well, I
haven't tested) machines at it as their APT proxy, and it then caches content
as it gets requested and downloaded.  Each machine which requests a new
package causes that package to get cached.  Each machine which requests a
cached package gets the local copy (unless it's been updated, in which case
the cache gets updated).
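
Client-side setup is a single apt configuration line; a sketch (the
hostname is a placeholder, 3142 is apt-cacher-ng's default port):

 # /etc/apt/apt.conf.d/02proxy
 Acquire::http::Proxy "http://apt-cacher-host:3142";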

> We hardly use any of that except a few popular repos. The big issue I
> always have with most of them is third-party repos. squid-deb-proxy is
> quite reliable, but again it's Squid with a custom config, nothing
> else, and it fails to cache Google debs.
>
> Squid is perfect for me because it can cache things the first time they
> are requested, so the next time anybody requests them they are ready.

This is exactly how apt-cacher-ng works.  I use it myself and I would
recommend you investigate it further for this purpose.

> The problem arises when big companies like Google and GitHub do not want
> us to cache their content and use various tricks so we can't do that.

That's a strange concept for a Debian repository (even third-party).

Are you sure you're talking about repositories and not just isolated .deb
files?


Antony.

--
A user interface is like a joke.
If you have to explain it, it didn't work.

                                                   Please reply to the list;
                                                         please *don't* CC me.


------------------------------

Subject: Digest Footer

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users


------------------------------

End of squid-users Digest, Vol 26, Issue 25
*******************************************


_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
