Hi,
Thanks for your reply. The following is the IP and abbreviated message:
(reason: 554 5.7.1 Service unavailable; Client host [65.24.5.137] blocked
using dnsbl-1.uceprotect.net;
On my squid issue: if aufs is less intensive and more efficient, I'll
definitely switch over to it. As for your suggestion about splitting into
multiple files, I believe the version I have can do this; it has multiple acl
statements for the safe_ports definition. My issue, though, is that there are
15000+ lines in this file, and on investigating, some 500 are duplicates.
I'd rather not have to go through this manually and do the split. Is there a
way I can split based on the dst, dstdomain, or url_regex types you referenced?
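Something like this rough, untested grep split is what I had in mind
(filenames are placeholders; it assumes an entry is either a bare IP, a
dotted domain, or else belongs in the regex file):

sort -u porn.list > porn.sorted   # also drops the ~500 duplicates
grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' porn.sorted > porn.ips
grep -E '^\.?([A-Za-z0-9-]+\.)+[A-Za-z]+$' porn.sorted > porn.domains
grep -vE -e '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' \
         -e '^\.?([A-Za-z0-9-]+\.)+[A-Za-z]+$' porn.sorted > porn.regex

Would that be roughly the right approach?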
I'll check out the squidclient output, but last time I did I didn't see
anything that stood out.
Thanks.
Dave.
----- Original Message -----
From: "Amos Jeffries" <squid3@xxxxxxxxxxxxx>
To: "Dave" <dmehler26@xxxxxxxxxx>
Cc: <squid-users@xxxxxxxxxxxxxxx>
Sent: Tuesday, July 03, 2007 7:42 AM
Subject: Re: FreeBSD Squid timeout issue
Dave wrote:
Hello,
Thanks to everyone who has offered suggestions with this issue. To
squid3@xxxxxxxxxxxxx: I tried a direct email, but your email server
rejected my message.
Ah, maybe you have other problems too. My squid3 address is only protected
against current spam sources. You'll have to check the bounce message to see
what it was; the mailing list hides the source info I would need to look you
up from this end and whitelist you.
I am getting a warning with my porn rejection list, which only occurs
when the configuration is changed from url_regex to dstdomain, that
subdomains are not valid. The file itself is at:
http://www.davemehler.com/porn.gz
You have rather a mix of content in that file. To be fast and well
handled I would suggest breaking it into three parts in the squid config,
like so:
acl porn dst ".../porn.ips"
acl porn dstdomain ".../porn.domains"
acl porn url_regex ".../porn.regex"
I'm not sure that all versions of squid can take one acl name with multiple
types; if yours does not, they may need different names.
Where the:
*.ips file gets lines like '192.168.0.0'
*.domains file gets lines like '.zugs-model-portal.com'
*.regex file gets lines with '=female+wrestling', etc.
(Note the preceding '.' in the dstdomain entries; it will catch any
sub-domain funkiness they try.)
That way each line is handled by an appropriate ACL, and most of them have
fast types.
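If your squid complains about the shared name, a minimal sketch with
separate names might be (paths elided as above; the http_access placement
is only illustrative):

acl porn_ips dst ".../porn.ips"
acl porn_domains dstdomain ".../porn.domains"
acl porn_regex url_regex ".../porn.regex"
http_access deny porn_ips
http_access deny porn_domains
http_access deny porn_regex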
I thought that would be easier than trying to push an attachment
through to the list to everyone.
I'm also wondering if my cache replacement policy is wrong; old items
don't seem to be being removed, even though the cache still has 81 MB
before it's full.
If the rest of my config would be helpful I'll post it.
You posted a copy of it on 26 June; if it's changed it might be worth a look
at the new version. Otherwise, I just took a look back at that, and diskd
is one of the filesystems I thought was unused these days. aufs, if it's
available, is easier on the disk.
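Switching should just be a one-line change; a sketch, with the path and
sizes as placeholders rather than your real values:

cache_dir aufs /usr/local/squid/cache 1000 16 256

(aufs takes the same Directory/Mbytes/L1/L2 arguments as the other stores.)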
I just noticed you have an object size of 0 accepted. I wonder if the
'old' objects are the ones which have no headers to match age against (or
I might be talking garbage here, I really don't know much about the
stores).
Hmm, have you checked out all the stats/settings squidclient can give you?
('squidclient mgr:menu' for a list; try the store-related entries,
e.g. 'squidclient mgr:storedir' to see the LRU policy stats.)
Thanks.
Dave.
----- Original Message -----
From: <squid3@xxxxxxxxxxxxx>
To: "Dave" <dmehler26@xxxxxxxxxx>
Cc: <squid3@xxxxxxxxxxxxx>; <squid-users@xxxxxxxxxxxxxxx>
Sent: Thursday, June 28, 2007 6:50 PM
Subject: Re: FreeBSD Squid timeout issue
Hello,
Thanks for your suggestions. I checked my squid.conf and the acls for
chat and spyware were of type dstdomain; porn was url_regex. I changed
that to dstdomain, and now when I do a squid -k reconfigure I am getting
syntax errors. As for the file sizes: chat has 2 lines, spyware has 1440
lines, and of course the big one, the porn rejection file, has 15025 lines.
Oh, aye, that's way too huge for regex to handle.
The error I'm repeatedly getting now, which I didn't get when the file was
url_regex, is that I have subdomains of parent domains and they are ignored.
Hmm, are you sure this is an error and not a warning? It sounds to me like
a little maintenance needs doing on that file:
- Duplicates can be removed.
- 'example.com' can be removed if you have '.example.com' elsewhere.
- 'www.example.com' can be removed if you have '.example.com' elsewhere.
Sounds like the last two of these are what you are being warned about.
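Something like this (untested, off the top of my head; the filename is a
placeholder and one entry per line is assumed) might do that cleanup:

sort -u porn.domains -o porn.domains
awk 'NR==FNR { if ($0 ~ /^\./) w[substr($0,2)] = 1; next }  # pass 1: note ".domain" wildcards
     /^\./   { print; next }                                # pass 2: keep the wildcards
     { d = $0; sub(/^www\./, "", d)
       if (!(d in w)) print }                               # drop entries a wildcard covers
    ' porn.domains porn.domains > porn.domains.clean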
If you're still having trouble you can email me the file and I'll check it
myself.
Does anyone use spyware, porn, and chat rejections, and if so, where did
you obtain them?
Also, I'm wondering why my cache isn't clearing out the oldest items; is
my cache replacement policy bad?
Quite possibly; my squid expertise doesn't extend into the replacement
policies yet. You will have to look to one of the others for help.
Thanks.
Dave.
----- Original Message -----
From: <squid3@xxxxxxxxxxxxx>
To: <squid-users@xxxxxxxxxxxxxxx>
Sent: Tuesday, June 26, 2007 9:27 PM
Subject: Re: FreeBSD Squid timeout issue
Hello,
Thanks for all replies.
I've got a good hard disk; I've been checking that and haven't found any
problems or seen any error messages in my logs.
I've adjusted my high cache size from 100% to 95%, but I'm starting to
look at whether squid is purging the oldest items from my cache. It seems
like when the cache gets full, or nearly so, I start having this issue.
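For reference, I believe these are the water-mark directives involved; 95
is what I set the high one to, and 90 is (I think) squid's default for the
low one, not necessarily what I have:

cache_swap_low 90
cache_swap_high 95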
As for my pornography and spyware rejection files, they are each a
considerable size; they are lists of sites I don't want visited,
downloaded, or to have anything to do with. If there's a way to speed this
up I'm all for it.
Thanks.
Dave.
Make sure that you are using dst or dstdomain as the ACL types on the
large lists instead of regex.
The regex type is quite slow, and large lists often become a drag. After
splitting the lists into 'need regex' and dstdomain, the speed increase is
often still worth the extra time spent maintaining two lists.
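A minimal sketch of that split (the ACL names and paths here are
placeholders, not from your config):

acl porn_fast dstdomain "/usr/local/etc/squid/porn.domains"
acl porn_slow url_regex "/usr/local/etc/squid/porn.regex"
http_access deny porn_fast
http_access deny porn_slow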
Make sure there is extra space on the cache disk. All the tutorials
mention making the cache 60%-80% of drive size. I can't recall what the
exact reasons were, but it had something to do with OS-level handling on
the drive.
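For example, on a dedicated 10 GB cache partition, 60%-80% works out to
roughly 6000-8000 MB, so with diskd it might look like this (path
illustrative):

cache_dir diskd /usr/local/squid/cache 7000 16 256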
Amos