
Re: What does this error mean?



On 10.11.15 18:43, Patrick Flaherty wrote:
>
>
> -----Original Message-----
> From: squid-users [mailto:squid-users-bounces@xxxxxxxxxxxxxxxxxxxxx] On
> Behalf Of squid-users-request@xxxxxxxxxxxxxxxxxxxxx
> Sent: Tuesday, November 10, 2015 5:09 AM
> To: squid-users@xxxxxxxxxxxxxxxxxxxxx
> Subject: squid-users Digest, Vol 15, Issue 26
>
> Send squid-users mailing list submissions to
>     squid-users@xxxxxxxxxxxxxxxxxxxxx
>
> To subscribe or unsubscribe via the World Wide Web, visit
>     http://lists.squid-cache.org/listinfo/squid-users
> or, via email, send a message with subject or body 'help' to
>     squid-users-request@xxxxxxxxxxxxxxxxxxxxx
>
> You can reach the person managing the list at
>     squid-users-owner@xxxxxxxxxxxxxxxxxxxxx
>
> When replying, please edit your Subject line so it is more specific than
> "Re: Contents of squid-users digest..."
>
>
> Today's Topics:
>
>    1. Re: What does this error mean? (Amos Jeffries)
>    2. Re: Subject: Re: authentication of every GET request from
>       part of URL? (Sreenath BH)
>    3. Re: Subject: Re: authentication of every GET request from
>       part of URL? (Amos Jeffries)
>    4. Assert, followed by shm_open() fail. (Steve Hill)
>    5. Help, long response time(2 seconds) in squid! (Xu Yongjian)
>    6. cache peer problem with Https only !! (Ahmad Alzaeem)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 10 Nov 2015 05:38:42 +1300
> From: Amos Jeffries <squid3@xxxxxxxxxxxxx>
> To: squid-users@xxxxxxxxxxxxxxxxxxxxx
> Subject: Re:  What does this error mean?
> Message-ID: <5640CC12.7080907@xxxxxxxxxxxxx>
> Content-Type: text/plain; charset=utf-8
>
> On 10/11/2015 1:54 a.m., Yuri Voinov wrote:
>>
>> This means that the client sent an RST packet. You can ignore this error.
>>
>
> Well, it's not always the client sending it. It could be a NAT device
> somewhere hitting some timeout or connection limit and aborting idle
> connections.
>
> If it is occurring a lot then it might be worth investigating. Having
> long-lived TCP connections just sitting around unused for very long
> periods is not very good for resource utilization. Hardware device
> tables overflowing in a NAT or router device is also pretty bad.
>
> Amos
>
> Hello,
>
> Thank you Yuri and Amos for your responses. The log entry points to the
> Squid IP and one of my client IP addresses. They are connected through an
> 'Internal' network using Oracle's VirtualBox. So there is nothing in
> between, meaning it's the client connecting to Squid directly. So it must
> be our client software creating an RST, or the Windows stack itself.

If your client uses Oracle VirtualBox, the guest OS usually uses NAT for
external access by default. That would explain the error.
>
>
> Thanks
> Patrick
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 9 Nov 2015 22:42:08 +0530
> From: Sreenath BH <bhsreenath@xxxxxxxxx>
> To: Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx>
> Cc: squid-users@xxxxxxxxxxxxxxxxxxxxx
> Subject: Re:  Subject: Re: authentication of every GET
>     request from part of URL?
> Message-ID:
>     <CALgKBSkUH9JUSB5-ZetZitnUYhW0Keik3QQj2rk-wsjC=mM+Bg@xxxxxxxxxxxxxx>
> Content-Type: text/plain; charset=UTF-8
>
> Hi Alex,
>
> Thanks for your detailed answers.
>
> Here are more details.
> 1. If the URL does not have any token, we would like to send an error
> message back to the browser/client, without doing a cache lookup or going
> to the backend Apache server.
>
> 2. If the token is invalid (that is, we can't find it in a database), that
> means we cannot serve data. In this case we would like to send back an
> HTTP error (something like a 401 or 404, along with a more descriptive
> message).
>
> 3. If the token is valid (found), remove the token from the URL, and use
> the remaining part of the URL as the key to look up in the Squid cache.
>
> 4. If found, return that data along with the proper HTTP status code.
> 5. If the cache lookup fails (not cached), send the HTTP request to the
> back-end Apache server (removing the token), get the returned result,
> store it in the cache, and return it to the client/browser.
>
> I read about ACL helper programs, and it appears I can do arbitrary
> validations in them, so that should work.
> Is it correct to assume that the external ACL code runs before URL
> rewriting?
>
> Does the URL rewriter run before a cache lookup?
>
> thanks,
> Sreenath
>
> On 11/8/15, Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
>> On 11/08/2015 06:34 AM, Sreenath BH wrote:
>>
>>> Is there a way for me to invoke some custom code for every request
>>> that Squid receives?
>>
>> Yes, there are several interfaces, including a built-in ACL, an
>> external ACL helper, a URL rewriter, and an eCAP/ICAP service. Roughly
>> speaking, the former ones are easier to use and the latter ones are more
>> powerful.
>>
>>
>>> That script would do the following:
>>>
>>> 1. Extract part of the URL(the token) and look up in a database to
>>> see if it is valid.
>>>     If valid, proceed to look up the cached object; otherwise go to
>>> back-end fetch, etc.
>>> 2. If the token is not found in database, return with an error, so
>>> that Squid can send back a not found type (some HTTP error) of
>>> response.
>>
>> If the above are your requirements, avoiding the word "authentication"
>> might help. It confuses people into thinking you want something far
>> more complex.
>>
>>
>> The validation in step #1 can be done by an external ACL. However, you
>> probably forgot to mention that the found token should be removed from
>> the URL. To edit the URL, you need to use a URL rewriter or an
>> eCAP/ICAP service.
>>
>> Everything else can be done by built-in ACLs unless you need to serve
>> very custom error messages. In the latter case, you will need an eCAP
>> or ICAP service.
>>
>> However, if "go to back-end fetch" means loading response from some
>> storage external to Squid without using HTTP, then you need an eCAP or
>> ICAP service to do that fetching.
>>
>> I recommend that you clarify these parts of your specs:
>>
>> What do you want to do when the token is not found in the URL?
>>
>> What do you want to do when an invalid token is found in the URL?
>>
>> Will sending a response using a simple template filled with some basic
>> request details suffice when a valid token is not found in the database?
>>
>>
>> HTH,
>>
>> Alex.
>>
>>
>>
>>> On 7/11/2015 1:33 a.m., Sreenath BH wrote:
>>>> Hi
>>>> I am very new to Squid, and think I have a strange requirement.
>>>> We want to serve cached content only if the client has been
>>>> authenticated before.
>>>> Since we don't expect the client software to send any information in
>>>> headers, we embed a token in the URL that we present to the user.
>>>>
>>>
>>> Um, you know how sending username and password in plain-text Basic
>>> auth headers is supposed to be the worst form of security around?
>>>
>>> It's not quite. Sending credentials in the URL is worse, even if it's
>>> just an encoded token.
>>>
>>> Why are you avoiding actual HTTP authentication?
>>>
>>> Why be so actively hostile to every other cache in existence?
>>>
>>>
>>>> So when the client s/w uses this URL, we want to extract the token
>>>> from URL and do a small database query to ensure that the token is
>>>> valid.
>>>>
>>>> This is in accelerator mode.
>>>> Is it possible to use something similar to basic_fake_auth and put
>>>> my code there that does some database query?
>>>
>>> The "basic_..._auth" parts of that helpers name mean that it performs
>>> HTTP Basic authentication.
>>>
>>> The "fake" part means that it does not perform any kind of validation.
>>>
>>> All of the text above has been describing how you want to perform
>>> actions which are the direct opposite of everything basic_fake_auth
does.
>>>
>>>> If the query fails, we don't return the cached content?
>>>
>>> What do you want to be delivered instead?
>>>
>>> Amos
>>> _______________________________________________
>>> squid-users mailing list
>>> squid-users@xxxxxxxxxxxxxxxxxxxxx
>>> http://lists.squid-cache.org/listinfo/squid-users
>>>
>>
>>
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 10 Nov 2015 06:42:27 +1300
> From: Amos Jeffries <squid3@xxxxxxxxxxxxx>
> To: squid-users@xxxxxxxxxxxxxxxxxxxxx
> Subject: Re:  Subject: Re: authentication of every GET
>     request from part of URL?
> Message-ID: <5640DB03.8030200@xxxxxxxxxxxxx>
> Content-Type: text/plain; charset=utf-8
>
> On 10/11/2015 6:12 a.m., Sreenath BH wrote:
>> Hi Alex,
>>
>> Thanks for your detailed answers.
>>
>> Here are more details.
>> 1. If the URL does not have any token, we would like to send an error
>> message back to the browser/client, without doing a cache lookup or
>> going to the backend Apache server.
>>
>> 2. If the token is invalid (that is, we can't find it in a database),
>> that means we cannot serve data. In this case we would like to send
>> back an HTTP error (something like a 401 or 404, along with a more
>> descriptive message).
>>
>
> All of the above is external_acl_type helper operations.
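(For illustration only: a minimal external_acl_type helper covering steps 1 and 2
could look roughly like the sketch below. The script name, the /token/<value>
URL layout and the database check are assumptions, not something specified in
this thread.)

-----
#!/usr/bin/env python3
# Sketch of an external_acl_type helper: Squid passes one %URI per line on
# stdin; the helper answers OK if the embedded token is known, ERR otherwise.
import re
import sys

TOKEN_RE = re.compile(r'/token/([A-Za-z0-9]+)')   # assumed URL layout

def token_is_valid(token):
    # Placeholder for the real database lookup (step 2 of the spec).
    return token in {"example-token"}

for line in sys.stdin:
    fields = line.split()
    url = fields[0] if fields else ""
    match = TOKEN_RE.search(url)
    if not match:
        sys.stdout.write("ERR message=missing_token\n")   # step 1: no token at all
    elif not token_is_valid(match.group(1)):
        sys.stdout.write("ERR message=invalid_token\n")   # step 2: unknown token
    else:
        sys.stdout.write("OK\n")                          # token accepted
    sys.stdout.flush()
-----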
>
>> 3. If the token is valid (found), remove the token from the URL, and
>> use the remaining part of the URL as the key to look up in the Squid cache.
>>
>> 4. If found, return that data along with the proper HTTP status code.
>
> The above is url_rewrite_program operations.
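(Again purely for illustration: a url_rewrite_program helper for steps 3 and 4
might look roughly like this. It assumes the Squid 3.4+ helper reply format,
"OK rewrite-url=..."; older Squid versions expect the bare rewritten URL
instead. The URL layout is the same assumption as in the previous sketch.)

-----
#!/usr/bin/env python3
# Sketch of a url_rewrite_program helper: strip the (already validated)
# token from the URL so the remaining URL becomes the cache key.
import re
import sys

TOKEN_RE = re.compile(r'/token/[A-Za-z0-9]+')   # assumed URL layout

for line in sys.stdin:
    fields = line.split()
    if not fields:
        continue
    url = fields[0]
    new_url = TOKEN_RE.sub("", url, count=1)
    if new_url != url:
        sys.stdout.write("OK rewrite-url=%s\n" % new_url)  # trimmed URL becomes the cache key
    else:
        sys.stdout.write("ERR\n")                          # ERR here means "leave the URL unchanged"
    sys.stdout.flush()
-----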
>
>> 5. If the cache lookup fails (not cached), send the HTTP request to the
>> back-end Apache server (removing the token), get the returned result,
>> store it in the cache, and return it to the client/browser.
>
> And that part is normal caching. Squid will do it by default.
>
> Except the "removing the token" part. Which was done at step #4
already, so
> has no relevance here at step #5.
>
>>
>> I read about ACL helper programs, and it appears I can do arbitrary
>> validations in them, so that should work.
>> Is it correct to assume that the external ACL code runs before URL
>> rewriting?
>
> The http_access tests are run before re-writing.
> If the external ACL is one of those http_access tests, the answer is yes.
>
>>
>> Does the URL rewriter run before a cache lookup?
>
> Yes.
>
>
>
> Although, please note that despite this workaround for your cache, it
> really is *only* your proxy which will work nicely. Every other cache on
> the planet will see your application's URLs as being unique and needing
> different caching slots.
>
> This not only wastes cache space for them, but also forces them to pass
> extra traffic in the form of full-object fetches at your proxy, which
> raises the bandwidth costs for both them and you far beyond what proper
> header-based authentication or authorization would.
>
> As other sysadmins around the world notice this unnecessarily raised
> cost, they will start to hack their configs to force-cache the responses
> from your application. That will bypass your protection system entirely,
> since your proxy may not even see many of the requests.
>
> The earlier you can get the application re-design underway to remove the
> credentials token from the URL, the earlier the external problems and
> costs will start to disappear.
>
> Amos
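(To tie the pieces together, the squid.conf wiring could look roughly like the
sketch below. Helper paths, ACL names, child counts and the error template are
all illustrative assumptions; the point is only the ordering Amos describes:
http_access and the external ACL run first, the rewriter next, and the cache
lookup after that.)

-----
# external ACL: validates the token before anything else (steps 1-2)
external_acl_type token_check children-max=10 %URI /usr/local/bin/token_check.py
acl valid_token external token_check
deny_info 404:ERR_ACCESS_DENIED valid_token
http_access deny !valid_token
# ...followed by the usual http_access allow rules for your clients

# rewriter: strips the token so the trimmed URL becomes the cache key (steps 3-4)
url_rewrite_program /usr/local/bin/strip_token.py
url_rewrite_children 10

# cache-miss handling (step 5) is normal Squid behaviour via the origin/peer
-----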
>
>
> ------------------------------
>
> Message: 4
> Date: Mon, 9 Nov 2015 17:58:59 +0000
> From: Steve Hill <steve@xxxxxxxxxxxx>
> To: squid-users@xxxxxxxxxxxxxxxxxxxxx
> Subject:  Assert, followed by shm_open() fail.
> Message-ID: <5640DEE3.1030003@xxxxxxxxxxxx>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
>
> On Squid 3.5.11 I'm seeing occasional asserts:
>
> 2015/11/09 13:45:21 kid1| assertion failed: DestinationIp.cc:41:
> "checklist->conn() && checklist->conn()->clientConnection != NULL"
>
> More concerning, though, is that usually when a Squid process crashes it
> is automatically restarted, but following these asserts I'm often seeing:
>
> FATAL: Ipc::Mem::Segment::open failed to
> shm_open(/squidnocache-squidnocache-cf__metadata.shm): (2) No such file or
> directory
>
> After this, Squid is still running, but won't service requests and
> requires a manual restart.
>
> Has anyone seen this before?
>
> Cheers.
>
> --
>   - Steve Hill
>     Technical Director
>     Opendium Limited     http://www.opendium.com
>
> Direct contacts:
>     Instant messager: xmpp:steve@xxxxxxxxxxxx
>     Email:            steve@xxxxxxxxxxxx
>     Phone:            sip:steve@xxxxxxxxxxxx
>
> Sales / enquiries contacts:
>     Email:            sales@xxxxxxxxxxxx
>     Phone:            +44-1792-824568 / sip:sales@xxxxxxxxxxxx
>
> Support contacts:
>     Email:            support@xxxxxxxxxxxx
>     Phone:            +44-1792-825748 / sip:support@xxxxxxxxxxxx
>
> ------------------------------
>
> Message: 5
> Date: Tue, 10 Nov 2015 16:49:27 +0800
> From: "=?GBK?B?0OzTwL2h?=" <yongjianchn@xxxxxxxx>
> To: "squid-users" <squid-users@xxxxxxxxxxxxxxxxxxxxx>
> Subject:  Help, long response time(2 seconds) in squid!
> Message-ID: <20151110084927.3A98EDD8001@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
> Content-Type: text/plain; charset=GBK
>
> Hi, all:
> I tried to use Squid as a web cache server today, but when I test it with
> http_load, I found Squid may have a latency of 2 seconds in some cases.
> Can someone help me? Thanks!
> The test is
> -------
> http_load -parallel 1 -seconds 20 url.txt
> # the content in url.txt is `http://10.210.136.51:3128/xyj/1`
> -------
> config for squid is
> -----
> http_port 3128 accel vhost vport
> cache_peer 10.210.136.51 parent 8888 0
> # use mem only
> cache_mem 1000 MB
> -----
> The access log
> -----
> 1447142491.264     39 10.210.136.54 TCP_MEM_HIT_ABORTED/200 4165 GET http://10.210.136.51:3128/xyj/1 - HIER_NONE/- application/octet-stream
> 1447142491.283     37 10.210.136.54 TCP_MEM_HIT_ABORTED/200 4165 GET http://10.210.136.51:3128/xyj/1 - HIER_NONE/- application/octet-stream
> 1447142493.288   2023 10.210.136.54 TCP_MEM_HIT_ABORTED/200 4165 GET http://10.210.136.51:3128/xyj/1 - HIER_NONE/- application/octet-stream   # 2023 ms! why?
> 1447142493.307   2023 10.210.136.54 TCP_MEM_HIT_ABORTED/200 4165 GET http://10.210.136.51:3128/xyj/1 - HIER_NONE/- application/octet-stream   # 2023 ms! why?
> 1447142493.326     38 10.210.136.54 TCP_MEM_HIT_ABORTED/200 4165 GET http://10.210.136.51:3128/xyj/1 - HIER_NONE/- application/octet-stream
> 1447142493.348     40 10.210.136.54 TCP_MEM_HIT_ABORTED/200 4165 GET http://10.210.136.51:3128/xyj/1 - HIER_NONE/- application/octet-stream
> -----
>
> ------------------------------
>
> Message: 6
> Date: Tue, 10 Nov 2015 12:08:50 +0300
> From: "Ahmad Alzaeem" <ahmed.zaeem@xxxxxxxxxxxx>
> To: <squid-users@xxxxxxxxxxxxxxxxxxxxx>
> Subject:  cache peer problem with Https only !!
> Message-ID: <000b01d11b97$6a894e10$3f9bea30$@netstream.ps>
> Content-Type: text/plain; charset="utf-8"
>
> Hi, I'm using pfSense with a cache_peer.
>
> Squid version is 3.4.10.
>
> I have a peer proxy on port 80 and I can use it with HTTP and HTTPS.
>
> Now if I use pfSense in the middle, let pfSense go to the remote proxy
> (10.12.0.32, port 80), and get internet access from the pfSense proxy,
> I only have HTTP websites working!
>
> HTTPS websites don't work.
>
> Any help?
>
> Here is my pfSense config:
>
> # This file is automatically generated by pfSense
> # Do not edit manually !
>
> http_port 172.23.101.253:3128
> icp_port 0
> dns_v4_first on
> pid_filename /var/run/squid/squid.pid
> cache_effective_user proxy
> cache_effective_group proxy
> error_default_language en
> icon_directory /usr/pbi/squid-amd64/local/etc/squid/icons
> visible_hostname mne
> cache_mgr azaeem@xxxxxx
> access_log /var/squid/logs/access.log
> cache_log /var/squid/logs/cache.log
> cache_store_log none
> netdb_filename /var/squid/logs/netdb.state
> pinger_enable off
> pinger_program /usr/pbi/squid-amd64/local/libexec/squid/pinger
>
> logfile_rotate 2
> debug_options rotate=2
> shutdown_lifetime 3 seconds
>
> # Allow local network(s) on interface(s)
> acl localnet src 172.23.101.0/24
> forwarded_for off
> via off
> httpd_suppress_version_string on
> uri_whitespace strip
>
> acl dynamic urlpath_regex cgi-bin \?
> cache deny dynamic
>
> cache_mem 64 MB
> maximum_object_size_in_memory 256 KB
> memory_replacement_policy heap GDSF
> cache_replacement_policy heap LFUDA
> minimum_object_size 0 KB
> maximum_object_size 4 MB
> cache_dir ufs /var/squid/cache 100 16 256
> offline_mode off
> cache_swap_low 90
> cache_swap_high 95
> cache allow all
>
> # Add any of your own refresh_pattern entries above these.
> refresh_pattern ^ftp:    1440  20%  10080
> refresh_pattern ^gopher:  1440  0%  1440
> refresh_pattern -i (/cgi-bin/|\?) 0  0%  0
> refresh_pattern .    0  20%  4320
>
> # Remote proxies
>
> # Setup some default acls
> # From 3.2 further configuration cleanups have been done to make things
> # easier and safer. The manager, localhost, and to_localhost ACL
> # definitions are now built-in.
> # acl localhost src 127.0.0.1/32
> acl allsrc src all
> acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901 3128 3127 1025-65535
> acl sslports port 443 563
>
> # From 3.2 further configuration cleanups have been done to make things
> # easier and safer. The manager, localhost, and to_localhost ACL
> # definitions are now built-in.
> # acl manager proto cache_object
>
> acl purge method PURGE
> acl connect method CONNECT
>
> # Define protocols used for redirects
> acl HTTP proto HTTP
> acl HTTPS proto HTTPS
>
> http_access allow manager localhost
> http_access deny manager
> http_access allow purge localhost
> http_access deny purge
> http_access deny !safeports
> http_access deny CONNECT !sslports
>
> # Always allow localhost connections
> # From 3.2 further configuration cleanups have been done to make things
> # easier and safer. The manager, localhost, and to_localhost ACL
> # definitions are now built-in.
> # http_access allow localhost
>
> request_body_max_size 0 KB
>
> delay_access 1 allow allsrc
>
> # Reverse Proxy settings
>
> # Custom options before auth
> dns_nameservers 8.8.8.8 10.12.0.33
> cache_peer 10.12.0.32 parent 80 0 no-query no-digest no-tproxy proxy-only
>
> # Setup allowed acls
> # Allow local network(s) on interface(s)
> http_access allow localnet
> # Default block all to be sure
> http_access deny allsrc
>
> cheers
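(A guess based on the posted config, not a confirmed diagnosis: by default Squid
treats CONNECT requests as non-hierarchical and sends them direct to the origin
rather than through the cache_peer, which would explain HTTP working via the
parent while HTTPS does not. If the parent proxy accepts CONNECT, forcing
traffic through it would look roughly like this.)

-----
cache_peer 10.12.0.32 parent 80 0 no-query no-digest no-tproxy proxy-only
# Force everything, including CONNECT for HTTPS, through the parent:
never_direct allow all
# (a softer alternative is "nonhierarchical_direct off")
-----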
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> squid-users mailing list
> squid-users@xxxxxxxxxxxxxxxxxxxxx
> http://lists.squid-cache.org/listinfo/squid-users
>
>
> ------------------------------
>
> End of squid-users Digest, Vol 15, Issue 26
> *******************************************
>
> _______________________________________________
> squid-users mailing list
> squid-users@xxxxxxxxxxxxxxxxxxxxx
> http://lists.squid-cache.org/listinfo/squid-users


_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



