Muhammad Sharfuddin wrote:
On Mon, 2009-08-24 at 17:05 +1200, Amos Jeffries wrote:
On Mon, 24 Aug 2009 10:24:41 +0600, Muhammad Sharfuddin
<m.sharfuddin@xxxxxxxxxx> wrote:
Note: the netfilter guys recommend using the iptables-restore tool for
firewall setup. It's much faster and much more secure than an incremental
build of the rules like this.
Ok, I will try.
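For reference, iptables-restore loads a complete ruleset from a file in one atomic step rather than running iptables once per rule. The file name, LAN range and port below are only an illustration (3128 is Squid's default port; the addresses are made up), not the actual firewall in question:

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# keep established connections and let the LAN reach Squid on 3128
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -s 192.168.0.0/24 -p tcp --dport 3128 -j ACCEPT
COMMIT

# saved as e.g. /etc/firewall.rules, then loaded in one go with:
iptables-restore < /etc/firewall.rules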
NOTE: The following rules only apply to external people attempting to
connect to your internal LAN machines.
... Or to people using your proxy as a free gateway to elsewhere on the
Internet.
They can do that to your proxy by simply sending an HTTP request to any one
of your internal LAN IPs with a forged HTTP header and URL.
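As a sketch of what such a request might look like (the destination site here is entirely made up), the attacker connects to the proxy port on one of the LAN addresses and sends an ordinary-looking request whose URL and Host point somewhere else entirely:

GET http://some-external-site.example/ HTTP/1.1
Host: some-external-site.example

If an http_access rule lets it through, Squid fetches the external site on the attacker's behalf.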
I think only the following rules are for anyone (internal/external):
acl allowed_for_all url_regex -i "/etc/squid/allowed_for_all.txt"
http_access allow allowed_for_all
acl ftp_site url_regex -i ftp://ftp.sight-board.de
http_access allow ftp_site
All the other rules are *only* for specific machines/IPs, e.g.
acl hod_ip src "/etc/squid/ipes/hod_ip.txt"
http_access allow hod_ip
acl cad_ip src "/etc/squid/ipes/cad_ip.txt"
http_access deny cad_ip
acl hod_tl_ip src "/etc/squid/ipes/hod_and_tl_ip.txt"
http_access allow hod_tl_ip
So I really don't understand why you said/wrote 'The following rules
*ONLY* apply to external people'.
Because you "allow localnet" (AKA unrestricted access for all internal
clients) before those rules.
Regardless of what they are, they will only be tested against requests
coming from outside the local network ranges your "localnet" ACL defines.
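In other words, http_access rules are checked top-down and the first match wins. With an ordering like the sketch below (the localnet definition is an assumed example; the rest is taken from the config quoted above), internal clients match "allow localnet" first and never reach the later lines:

acl localnet src 192.168.0.0/16
http_access allow localnet
http_access allow allowed_for_all
http_access allow hod_ip
http_access deny cad_ip

The specific allow/deny rules would need to sit above "allow localnet" to restrict internal machines as well.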
cache_dir diskd /var/cache/squid 50000 16 256
diskd is probably your problem.
From the use of iptables as a firewall I would guess that this is a Linux
box. On Linux you should try AUFS storage for the fastest speed.
If that label is the only change on the config line, you can test it with a
simple re-config.
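Concretely, that is a one-word change on the existing cache_dir line (path and sizes kept from the quoted config), followed by a reconfigure:

cache_dir aufs /var/cache/squid 50000 16 256

squid -k reconfigure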
Well, same results with aufs.
You are recommending aufs over diskd, but the following URL suggests 'diskd'
as the store type of choice for the Cache-Offs:
http://www.linuxsa.org.au/pipermail/linuxsa/2004-June/070228.html
Written in 2004. Server CPU threading has come a long way since then.
diskd is a single-threaded helper application, with an _upper_ IO limit of
one file read at a time. Squid itself does not block, but the helper's
reads/writes block each other _within the helper_.
AUFS is a multi-threaded component utilizing the kernel and all
available CPUs for non-blocking reads/writes to as many files as needed
simultaneously. The limits are defined by the file descriptors (FD)
available to Squid and the system's CPU capabilities.
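The FD side of that is easy to check: "ulimit -n" shows the per-process file descriptor ceiling the OS grants the Squid process, and newer Squid releases also have a max_filedescriptors directive in squid.conf to request a specific value within that ceiling (check your version's release notes before relying on it). The value below is only an example:

ulimit -n

and in squid.conf:

max_filedescriptors 8192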
diskd is only recommended for use on *BSD systems where AUFS support is
not available (yet).
Also, with ~50GB of storage you probably want to use something like
32 or 64 for the Level-1 value (currently 16). Changing that requires a
cache delete and rebuild with 'squid -z', though.
What's the rule/formula for the Level-1 and Level-2 values? Is it related
to storage size?
Yes. OSes used to have an upper limit on the number of files stored in a
single directory, and I think most still do for the common file systems.
Between them these numbers define how many folders are used in the
cache. Smaller caches only need a few folders; bigger caches need a lot
more to keep the OS happy.
The default squid.conf comes tuned for a 200MB cache, which is quite small
for any real use. When you are heading into tens of GB it's a good idea to
start upping these numbers. How much depends on your OS filesystem and the
average object size in the cache. Big and huge objects obviously reduce the
pressure for extra folders.
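Putting that together for this cache, the suggested change would look like the line below; the rebuild steps assume the existing cache contents can simply be thrown away:

cache_dir aufs /var/cache/squid 50000 64 256

squid -k shutdown
rm -rf /var/cache/squid/*
squid -z
(then start Squid again via your usual init script)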
These days I'm advising people to terminate their file extension patterns with
(\?.*)?$ instead of just $, to catch all the sites using dynamic parts in
their URLs.
You mean the following?
(\?.swf)?$
(\?.mdi)?$
e.g.
refresh_pattern -i (\?.swf)?$ 43200 100% 43200 override-lastmod
override-expire
No, no.
This:
refresh_pattern -i \.swf(\?.*)?$ ....
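With the options from the earlier attempt kept, the full rule would read something like:

refresh_pattern -i \.swf(\?.*)?$ 43200 100% 43200 override-lastmod override-expire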
Amos
--
Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE18
Current Beta Squid 3.1.0.13