# fragment anchors are not valid within HTTP. They are used by the user-agent only to select the starting point in the retrieved page, not by servers or proxies, and must be stripped off before the URL is sent in HTTP.

Example: requests for both http://www.example.com/test.html#a and http://www.example.com/test.html#b are the same URL in HTTP: http://www.example.com/test.html

Browsers know to do this transformation, and it can be argued that cURL should as well... squidclient is intentionally too dumb to do this kind of transformation. It's a test & debug client, not a full HTTP user agent.

Regards
Henrik

On Fri, 2008-10-03 at 07:59 -0700, robdaugherty@xxxxxxxxx wrote:
> I have squid running in two separate configurations, one as a web
> accelerator, the other as a forward proxy. When I request a URL such
> as http://www.mydomain.com/ from each squid I get a positive
> response. When I modify that URL to http://www.mydomain.com/#test the
> web accelerator instance again works fine. However, sending that
> modified URL to the forward proxy using squidclient (and also cURL via
> PHP) I get a 400 Bad Request error returned.
>
> Further testing shows this happens with any URL that contains a pound/
> hash sign through the forward proxy, but never with the web
> accelerator. Such URLs work correctly via a browser pointed to the
> proxy, but they need to work when requests are made via cURL or
> squidclient. I've reviewed the Squid configs and don't see anything
> out of the ordinary.
>
> Does anyone know if this is a known issue? Can anyone else re-create
> the issue in their own configuration? I can generate a workaround
> either with rewriters or further upstream before the URL goes out -
> but it seems like a bug to me.
>
> -Rob
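The fragment-stripping behaviour Henrik describes (and which the poster's rewriter workaround would need to replicate) can be sketched in Python; this is not part of the original thread, just a minimal illustration using the standard library's URL parser:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_fragment(url):
    """Drop the #fragment component before a URL is sent over HTTP,
    mirroring what browsers do before issuing the request."""
    parts = urlsplit(url)
    # Rebuild the URL with an empty fragment; everything else is unchanged.
    return urlunsplit((parts.scheme, parts.netloc, parts.path, parts.query, ""))

# Both example URLs from the thread reduce to the same request URL:
print(strip_fragment("http://www.example.com/test.html#a"))  # http://www.example.com/test.html
print(strip_fragment("http://www.example.com/test.html#b"))  # http://www.example.com/test.html
```

A client-side filter like this, applied before handing the URL to squidclient or cURL, would avoid the 400 Bad Request without touching the Squid configuration.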