Hi All,

Is it normal for squid not to cache files from a backend webserver if the URL is protected by a .htaccess file with basic auth?

I'm running squid 2.6.5-6etch1 with this conf:

http_port 80 vhost
cache_peer localhost parent 1234 0 originserver default login=PASS
icp_port 0
hierarchy_stoplist cgi-bin ?
cache_mem 1024 MB
maximum_object_size 4096 KB
maximum_object_size_in_memory 2048 KB
cache_dir ufs /var/spool/squid 1024 16 256
logformat combined %>a %ui %un [%tl] [%tr] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid/access.log combined
emulate_httpd_log on
log_mime_hdrs on
refresh_pattern ^ftp:    1440  20%  10080
refresh_pattern ^gopher: 1440   0%   1440
refresh_pattern .           0  20%   4320
connect_timeout 20 seconds
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl Safe_ports port 80   # http
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access allow localhost
http_access allow all
http_reply_access allow all
cachemgr_passwd xx all
strip_query_terms off
coredump_dir /var/spool/squid

Basically, when I access a URL without the .htaccess in place I can get a TCP_MEM_HIT:NONE, but as soon as I put the .htaccess back I never do. Is there a way around this? I have numerous largish files being served from an NFS mount that would be much better off served from memory, but they need to sit behind some auth mechanism.

Cheers
Brent
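
P.S. In case it matters, the protection is nothing exotic, just a stock basic-auth .htaccess along these lines (the AuthUserFile path and realm name below are placeholders, not my actual values):

AuthType Basic
AuthName "Restricted"
AuthUserFile /path/to/.htpasswd
Require valid-user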