On 2014-02-13 09:47, Carlos Defoe wrote:
Hello,
Is there a way to be sure that some objects will be cached?
I'm trying to cache this image blog:
http://lustik.tumblr.com
I configured one refresh_pattern line to match all of tumblr, with some
options that, as far as I understood, will aggressively try to cache it.
####
# REFRESH_PATTERNS
####
refresh_pattern -i tumblr.com 2880 90% 7200 override-expire override-lastmod ignore-no-store ignore-reload ignore-private
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
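(Note that a change like this only takes effect once the running Squid re-reads its configuration; a quick sanity check, assuming the squid binary is on the PATH and uses the edited squid.conf:
  squid -k parse          # syntax-check the configuration
  squid -k reconfigure    # ask the running Squid to reload it
)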
No luck. All I get with this is always TCP_MISS/200, for all objects.
E.g.:
TCP_MISS/200 172504 GET http://24.media.tumblr.com/967c977f757bc64f9e10184acc934bd2/tumblr_n0qsckwQA31qztdg6o4_500.jpg
I tried loading that page in different browsers and on different
machines, but the objects are never cached. Why is that? What can I
do?
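(One way to see what Squid itself is deciding, assuming it listens on 127.0.0.1:3128, is to request the same object twice through the proxy and compare the X-Cache header it adds, e.g.:
  curl -s -o /dev/null -D - -x http://127.0.0.1:3128 \
    http://24.media.tumblr.com/967c977f757bc64f9e10184acc934bd2/tumblr_n0qsckwQA31qztdg6o4_500.jpg \
    | grep -i '^x-cache'
A cached object should show a HIT from the proxy on the second run, and TCP_HIT or TCP_MEM_HIT in access.log.)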
What Squid version? It's a HIT for me with default cache settings,
using the current latest version (3.4.3).
HTTP/1.1 200 OK
Accept-Ranges: bytes
Cache-Control: max-age=31536000
Content-Type: image/jpeg
Date: Wed, 12 Feb 2014 21:08:36 GMT
ETag: "1b6df41f754d349a0b3d9314d71431ee"
Last-Modified: Sun, 09 Feb 2014 18:50:50 GMT
Server: ECS (syd/EBBD)
<snip>
X-Cache: HIT
Content-Length: 171808
Age: 11
X-Cache: HIT from treenet.co.nz
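(For reference, the origin's caching headers can be reproduced with a plain client, e.g.:
  curl -s -D - -o /dev/null http://24.media.tumblr.com/967c977f757bc64f9e10184acc934bd2/tumblr_n0qsckwQA31qztdg6o4_500.jpg
The Cache-Control, Last-Modified and ETag values above are what Squid's refresh logic works from.)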
Note that the max-age (1 year) from the server is significantly larger
than your 7200-minute (5-day) limit. So your rule will be *shortening*
the object storage time if/when it works.
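(If the aim is to keep these images for as long as the origin allows, one illustrative option is to raise the rule's upper limit to roughly a year, 525600 minutes, while keeping the same options:
  refresh_pattern -i tumblr.com 2880 90% 525600 override-expire override-lastmod ignore-no-store ignore-reload ignore-private
Alternatively, dropping override-expire lets the origin's Cache-Control: max-age=31536000 apply as-is. Sketches only; the override-* and ignore-* options violate HTTP and should be used with care.)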
If my disk cache is already full, is the behavior to keep the objects
that are already stored, or to delete the oldest and store the new
ones? In other words, could this be caused by a full cache_dir?
Maybe, but the space-clearing removals also depend on time since last
use, so this should not happen during testing.
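(To check how full the cache_dir actually is, the cache manager reports current versus maximum store size, assuming squidclient can reach the proxy on its default port:
  squidclient mgr:storedir | grep -i size
The cache_swap_low and cache_swap_high directives, 90% and 95% by default, control when Squid starts evicting objects to free space.)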
Amos