Re: Squid DNS Issues

 



Amos,
Yes, you are right!
My internal DNS Stats are as follows:
Nameservers:
IP ADDRESS                                     # QUERIES # REPLIES
---------------------------------------------- --------- ---------
xxx.xxx.xxx.xx                                     51219     46320

As you can see, there is quite a big gap between the queries and the replies:
51219 - 46320 = 4899 queries (nearly 10%) never received a reply.

Other than the NAT errors, queue length errors, and large URL warnings
in the cache.log, I cannot pinpoint why my server builds up a long queue
and cannot get most of its queries resolved by the DNS.
DNS is working well for the other Squid servers, yet shifting users from
the failing Squid server onto a functioning one causes the functioning
server to develop the same issues.

What is interesting, though, is that no sooner have I started Squid than
I get a queue congestion warning and numerous NAT warnings.
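
A quick way to check, from the Squid box itself, whether the nameserver is
answering at all while the queue builds up (the nameserver IP and interface
name below are placeholders):

  # Does the internal nameserver answer a single query from the Squid box?
  dig @192.0.2.10 www.squid-cache.org +time=2 +tries=1

  # Watch port 53 on the wire to see whether replies actually come back
  # while Squid is busy.
  tcpdump -ni eth0 udp port 53 and host 192.0.2.10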



On Tue, Jun 28, 2011 at 2:07 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
> On 28/06/11 22:45, Richard Zulu wrote:
>>
>> Thank you Amos,
>>
>> On Tue, Jun 28, 2011 at 2:17 AM, Amos Jeffries<squid3@xxxxxxxxxxxxx>
>>  wrote:
>>>
>>> On Mon, 27 Jun 2011 08:05:59 +0300, Richard Zulu wrote:
>>>>
>>>> Hey,
>>>> I have Squid version 3.1.9 working as a forward web proxy serving
>>>> close to 500 users with over 54000 requests every other day.
>>>> Recently, however, it has been failing to communicate with the DNS
>>>> server completely, which leads to very few requests being completed.
>>>> This builds up a long queue of outstanding requests, which later
>>>> causes Squid to hang.
>>>> Shifting the same users to another Squid cache causes similar
>>>> problems. What could be the issue here?
>>>> Some of the errors generated in the cache.log are here below:
>>
>> The NAT failure below and the queue congestion are causing my proxy
>> server to hang.
>
> Hang? The queue congestion is an exponential queue size increase each time
> the warning appears.
>
> I don't think those two would lead to that (maybe, but I don't think so).
> Slower-than-normal access times on every request, sure, but not a hang.
>
> The absence of DNS responses would lead to a hang. So getting back to that:
> do you have any clues about why Squid may not be able to communicate with
> it? DNS is as critical as having the cables plugged in.
>
>>
>> However, I have read the link. I DNAT all the port 80 traffic from my
>> users to my proxy server.
>> All the users surf using private IPs on their machines, with one public
>> IP on the gateway, which is where I do the DNAT to Squid.
>> How best can I separate normal traffic from NATted traffic to my Squid
>> on my gateway, and what might be causing non-NATted traffic to show up
>> at my proxy? Is it a NAT vulnerability?
>
> Ouch. The NAT port change has to be done on the Squid box to retain the
> destination IP properly.
>  I recommend looking into policy routing the port 80 packets to the Squid
> box, then doing the DNAT step on the Squid box.
>  http://wiki.squid-cache.org/ConfigExamples/Intercept/IptablesPolicyRoute
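>
>  In rough outline, that setup boils down to something like this (the
>  addresses, mark value, routing table number, and ports are placeholders;
>  the wiki page above carries the full example):
>
>    # On the gateway: mark port 80 traffic, except traffic from the Squid
>    # box itself (to avoid a forwarding loop), and route it to the Squid
>    # box instead of NATing it here.
>    iptables -t mangle -A PREROUTING -p tcp --dport 80 -s 192.168.0.2 -j ACCEPT
>    iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 2
>    ip rule add fwmark 2 table 100
>    ip route add default via 192.168.0.2 table 100
>
>    # On the Squid box (192.168.0.2): redirect the policy-routed port 80
>    # traffic into Squid's intercept port.
>    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3129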
>
>>
>>>> getsockopt(SO_ORIGINAL_DST) failed on FD 128:
>>>
>>>  NAT failure.
>>>
>>> Could be a couple of things. Some seriously bad, and some only trivial.
>>>
>>>  * On Linux it happens if you allow non-NAT clients to access a port
>>> marked "intercept" or "transparent". The ports for direct client->proxy
>>> and NAT connections need to be separate, and the NAT one firewalled away
>>> so it can't be accessed directly. See the Squid wiki config examples for
>>> DNAT or REDIRECT for the iptables "mangle" rules that protect against
>>> these security vulnerabilities.
>>>  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
>>>  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect
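>>>
>>>  A minimal sketch of that separation on Squid 3.1 (the port numbers are
>>>  placeholders; the wiki pages above carry the complete rules):
>>>
>>>    # squid.conf: one port for explicit proxy clients, one for NATed traffic
>>>    http_port 3128
>>>    http_port 3129 intercept
>>>
>>>    # On the Squid box: drop direct connections to the intercept port.
>>>    # Packets REDIRECTed in the nat table still have dport 80 when they
>>>    # pass through mangle PREROUTING, so they are not affected.
>>>    iptables -t mangle -A PREROUTING -p tcp --dport 3129 -j DROP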
>>>
>>>  * On OpenBSD 4.7 or later (which may or may not need some patches) it
>>> can be the same as on Linux. Or, if the OS has partial but broken
>>> SO_ORIGINAL_DST support, the message shows up but means only that the OS
>>> is broken.
>>>
>>>  * On other non-Linux systems it is a Squid bug. It means nothing, but I
>>> want to get it fixed/silenced.
>>>
>>>
>>>> squidaio_queue_request: WARNING - Queue congestion
>>>
>>> http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion
>>>
>>>
>>>> urlParse: URL too large (12404 bytes)
>>>
>>> Exactly what it says: the URL is too big for Squid to handle. There should
>>> be a 4xx status sent back to the client so it can retry or whatever.
>>>
>>>
>>>> statusIfComplete: Request not yet fully sent "POST
>>>>
>>>>
>>>>
>>>> http://person.com/ims.manage.phtml?__mp[name]=ims:manage&action=bugreport&js_id=47&";
>>>
>>> Server or client disconnected halfway through a POST request.
>>>
>>>
>>>>  WARNING: unparseable HTTP header field {Web Server}
>>>
>>> http://wiki.squid-cache.org/KnowledgeBase/UnparseableHeader
>>>
>>> Amos
>>>
>
>
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.9 and 3.1.12.3
>


