Thanks for the reply, Amos.
Just to clarify my intentions about this setup.
In my current setup all requests are sent to and handled by the CMS's
"index.php".
Since over 90% of the requests come from anonymous users, who always get
the same output, I want to accelerate this part. All authentication is
handled by the CMS and is not something Squid needs to think about.
Amos Jeffries wrote:
Rune Langseid wrote:
Hi there,
I'm trying to set up a reverse proxy in front of a CMS.
Ok.
Since there are both anonymous users and logged-in users accessing this
system, I only want to serve the Squid cache to the anonymous users.
WHY? Surely your authenticated users are allowed to do the same things
as anonymous ones (plus some extras only for them).
I was thinking of setting a cookie called "Authenticated" when the user
logs in, as this is easy to check with an "acl".
Like this:
acl cookie_is_set req_header Cookie ^.*Authenticated.*
However, I am facing a few problems when trying to let these users
through Squid.
I have tried this:
acl cookie_is_set req_header Cookie ^.*Authenticated.*
cache deny cookie_is_set
This setup works in a way, but Squid also deletes the cache object used
by the anonymous users, which is something I don't want. Could this be
a bug?
Sort of. Squid does not fully support HTTP/1.1 - this type of URL
difference is part of the HTTP/1.1 support.
If it's still occurring when Squid is apparently HTTP/1.1 compliant then
it will be a bug.
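(For reference, the HTTP/1.1 mechanism in question is the Vary response
header. If the CMS marked its responses with something like

  Vary: Cookie

a fully HTTP/1.1 compliant cache would keep a separate variant per Cookie
value. That is only the standard behaviour in outline; how completely a
given Squid version honours it is exactly the point above.)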
I have also been experimenting with "always_direct" with no luck.
Like this:
always_direct allow cookie_is_set
I guess I am missing something important here...
always_direct bypasses the peer config you have for matching requests.
Is there another way to solve this?
E.g. by using cache_peer and cache_peer_access?
Possibly. If you wanted one or the other type of user not to be able to
request new content from the peer.
What I suspect is that you actually want to use:
miss_access deny !cookie_is_set
That way authenticated users are allowed to pull new objects into the
cache, and non-authenticated users are only allowed to use those already
pulled in by somebody else.
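Pulled together with the acl from earlier, that suggestion is a two-line
sketch (adjust the acl to however you end up marking authenticated users):

  acl cookie_is_set req_header Cookie ^.*Authenticated.*
  # only requests carrying the cookie may fetch a MISS from the origin;
  # everyone else is limited to objects already in the cache
  miss_access deny !cookie_is_set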
Your CMS should be setting Expires: and Cache-Control: headers
properly to control which items are storable in the cache and which
need to be refreshed.
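For example, a page that every visitor may share for ten minutes could be
sent with response headers roughly like

  Cache-Control: public, max-age=600
  Expires: <an HTTP-date ten minutes ahead>

while per-user pages would carry Cache-Control: private (or no-cache) so
they are never stored. The exact values are up to the CMS; these are only
illustrative.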
You proposed this setting:
miss_access deny !cookie_is_set
This will stop requests which are not authenticated (cookie does not
exist), which is something I don't want.
Anonymous users should generate the cache object if it does not exist,
or if it has expired.
If the cookie exists, Squid should just pass the request along to
index.php as if the cache did not exist.
The "cache deny cookie_is_set" does this, but it also releases the cache
for the page used by anonymous user.
I guess I'll end up patching Squid to modify this function not to cause
a "RELEASE" on the cached object. This is not an ideal solution, but if
it works for now it is enough.
I'm using Squid 2.6.5, and my squid.conf looks like this:
--
cache_peer localhost parent 80 0 originserver
http_port 8080 vhost
acl cookie_is_set req_header Cookie ^.*Authenticated.*
cache deny cookie_is_set
#always_direct allow cookie_is_set
acl all src 0.0.0.0/0.0.0.0
http_access allow all
So from that config I read that you want ANYBODY to be able to access
any website through your proxy at port 8080?
I'd use:
- cache_peer_domain or cache_peer_access to limit the possible
destinations to those your peer provides.
- never_direct to prevent requests the CMS is not meant to provide
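A rough sketch of those two, assuming the public site is www.example.com
(substitute your real domain) and the localhost peer from the config above:

  acl our_sites dstdomain www.example.com
  # only accept and forward requests for our own site
  http_access allow our_sites
  http_access deny all
  cache_peer_access localhost allow our_sites
  cache_peer_access localhost deny all
  # never fetch from any origin other than the peer
  never_direct allow all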
Also, cookies can be forged or spoofed easily. If you can, try to set up
some proper authentication. The auth_param programs can be an easily
hand-coded script to test USER/PASS against anything you want.
That also allows the users to use their familiar browser login controls
for your site.
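If you go down that road, the glue is roughly this (a sketch only;
/usr/local/bin/check_cms_user is a made-up helper that reads
"username password" lines on stdin and answers OK or ERR):

  auth_param basic program /usr/local/bin/check_cms_user
  auth_param basic children 5
  auth_param basic realm My CMS
  # cms_users can then stand in for the cookie acl in rules like miss_access
  acl cms_users proxy_auth REQUIRED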
The ports in this example squid.conf were easy to misunderstand, I guess.
When everything is settled Squid will of course run on port 80, and
Apache will be on port 8080 (just to pick one).
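For the record, the final layout is then only a two-line change to the
config quoted above (a sketch, with Apache moved to 8080):

  http_port 80 vhost
  cache_peer localhost parent 8080 0 originserver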
Regards,
Rune