Re: url length limit

Gregori Parker wrote:
So this has already been changed to 8192 bytes in the current 3.0-STABLE10?

Yes.

 I'd probably be willing to try that build; however, these are
production servers, so I'm skeptical about trying bleeding-edge
versions.

Fair enough.

 3.1.0.1 would be a very hard sell - can you point me towards some
reading material on the specific enhancements to memory usage, diskless
operation, reverse-proxy, etc. in the v3 branches?

There's no specific documentation. Squid-3 is equivalent to Squid-2, except for some features not yet ported up.

The diskless state of 3.1 is largely a side effect of squid.conf default adjustments toward reasonable modern network object sizes. Disk can still be used or not, but the defaults are now configured for the fast diskless state, for 'just works' operation.

Most of the setting changes can be made manually in an older squid.conf. The values are all listed in the commit message:
  http://www.squid-cache.org/Versions/v3/3.1/changesets/b9208.patch

(NOTE: the "cache_dir null" setting is still explicitly required for 3.0 as for 2.x)
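
For example, a minimal diskless squid.conf for 2.x/3.0 might look something like this (values are illustrative, not the exact ones from that changeset):

  # Memory-only operation: the "null" store type must be built in
  # (--enable-storeio including "null"); the path argument is unused.
  cache_dir null /tmp

  # Size the in-memory cache to suit the box (example value).
  cache_mem 512 MB

  # Allow larger objects into the memory cache than the small default
  # (example value).
  maximum_object_size_in_memory 512 KB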

Amos


-----Original Message-----
From: Amos Jeffries [mailto:squid3@xxxxxxxxxxxxx]
Sent: Thursday, November 06, 2008 7:37 PM
To: Gregori Parker
Cc: squid-users@xxxxxxxxxxxxxxx
Subject: Re:  url length limit

Gregori Parker wrote:
Hi all - I am using an array of squid servers to accelerate dynamic
content, running 2.6.22 and handling a daily average of about 400
req/sec across the cluster.  We operate diskless and enjoy a great hit
rate (>80%) on very short-lived content.

About 50+ times per day, the following appears in my cache.log:

squid[735]: urlParse: URL too large (4738 bytes)
squid[735]: urlParse: URL too large (4470 bytes)
squid[735]: urlParse: URL too large (4765 bytes)
...

I understand that Squid is configured at compile time to cut off URLs
larger than 4096 bytes, as defined by MAX_URL in src/defines.h, and that
changing this has not been tested.  Nevertheless, since I am expecting
very long URLs (all requests are long query strings, responses are
SOAP/XML), and the ones getting cut off are not severely over the limit,
I would like to explore this change further.

Has anyone redefined MAX_URL in their squid setups?  Do these 'URL too
large' requests get logged?  If not, is there a way I could get Squid
to tell me what the requests were, so that I can verify that we have an
operational need to increase the URL limit?

It has been tried at 8192 with no sign of trouble in Squid-3.
If your URLs get much larger than that, we really do need it tested up as high as 128KB, so feel free to build with larger values; just please report back how it goes (particularly if it's good news).
http://www.squid-cache.org/bugs/show_bug.cgi?id=2267
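
If you do rebuild, the change itself is just the one compile-time constant; as an illustrative sketch (the surrounding file layout may differ between releases):

  /* src/defines.h -- illustrative only; raises the URL limit at compile time */
  #define MAX_URL 8192   /* default is 4096 */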

I'd suggest experimenting with Squid 3.1.0.1 to see if it's usable in your setup. The URL limit has been raised to 8KB already, and diskless operation is much more polished and native.


As for logging the URI: most things in Squid are dropped when they are found to overflow the buffers like that. The details may be logged to cache.log when debug_options is set to the right section and level. I'm not sure right now which one is relevant to 2.6, though; there are a few available.
http://wiki.squid-cache.org/KnowledgeBase/DebugSections
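
The directive takes section,level pairs, so it would be something along these lines, with the section number checked against that page first (the 23 below is only a guess):

  # Keep everything at level 1, raise one debug section (verify the
  # section number on the DebugSections page before relying on it)
  debug_options ALL,1 23,2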


Amos


--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.1
